<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"  xmlns:media="http://search.yahoo.com/mrss/">
<channel>
  <title>Insights</title>
  <link>https://nearform.estd.dev/insights</link>
  <description>

Analysis, advice and inspiration. Sharing what we’ve learned with each other, our clients and our coding community expands our collective capabilities. Knowledge empowers.
</description>
  <language>en-GB</language>
  <atom:link href="https://nearform.estd.dev/feed/" rel="self" type="application/rss+xml" />
      <item>
        <title><![CDATA[Top tips from Nearform’s website performance optimisation checklist]]></title>
        <link>https://nearform.estd.dev/digital-community/top-tips-from-nearform-s-website-performance-optimisation-checklist</link>
        <guid>https://nearform.estd.dev/@/page/60554778-63f4-43aa-b58f-5f4016029745</guid>

      
                    <category>Digital Community</category>
              
    

        <pubDate>Fri, 23 Aug 2024 00:00:00 +0000</pubDate>
                            <media:content url="https://nearform.estd.dev/media/pages/digital-community/top-tips-from-nearform-s-website-performance-optimisation-checklist/e7c60d0893-1725542994/top-tips-from-nearform-s-website-performance-optimisation-checklist-500x300-crop-q80.png" type="image/png" medium="image"> </media:content>
            
            <description>
                
            <![CDATA[
            <h2>

We share essential steps every developer should follow to ensure their website performs at its best, with specific insights for Next.js
</h2>
                                                                                                                                        
                                
<div class="t-large ">
<p>In today's fast-paced digital landscape, the need for top speed and performance for your website has never been more critical. Whether it's to capture user attention, improve search engine optimisation (SEO) or simply provide a seamless browsing experience, performance optimisation is a priority for website developers across all stacks.</p></div>
                            
                                

                                                                                                
                                
<div class="t-small ">
<p>At Nearform we've had the privilege of working on a variety of projects, ranging from building apps from the ground up to critical interventions in ongoing developments. This extensive experience has given us deep insights into performance optimisation. We frequently collaborate with clients who demand exceptional scalability and performance, and our proven strategies have consistently delivered outstanding results.</p></div>
                            
                                

                                                                                                                                                                
                                <h2 class="t-3xl ">Website performance is a fundamental aspect of user satisfaction and engagement</h2>
                            
                                

                                                                                                                                    
                                
<div class="t-small ">
<p>Lately, many developers have made Next.js their web development framework of choice. This is due to its ability to enhance SEO and improve search engine rankings, driving more traffic and engagement to their website.</p><p>The rise in popularity of Next.js can be attributed in part to its robust performance capabilities. With built-in server-side rendering (SSR) and static site generation (SSG), Next.js empowers developers to create blazing-fast web applications without compromising on functionality or user experience.</p><p>However, even the most powerful tools require fine-tuning to unleash their full potential. This guide will cover general optimisation techniques, with a specific focus on Next.js, to help you achieve superior performance and better rankings.</p><p>As we dive into the details of optimising Next.js website performance, it's essential to recognise that speed is not just a technical concern — it's a fundamental aspect of user satisfaction and engagement. Users expect websites to load quickly and seamlessly, regardless of the underlying technology stack. Here are some figures to be aware of:</p><ul><li><p><a href="https://blog.kissmetrics.com/wp-content/uploads/2011/04/loading-time.pdf" target="_blank">47% of users</a> expect a page to load in under 2 seconds</p></li><li><p>100ms of latency <a href="https://www.gigaspaces.com/blog/amazon-found-every-100ms-of-latency-cost-them-1-in-sales" target="_blank">costs 1% in sales</a></p></li><li><p>As mobile page load time grows from 1s to 5s, the probability of a bounce <a href="https://www.thinkwithgoogle.com/marketing-strategies/app-and-mobile/mobile-page-speed-new-industry-benchmarks/" target="_blank">increases by 90%</a></p></li></ul><p>While our focus is on Next.js, it's worth noting that many of the performance optimisation techniques we'll discuss can be applied across various technology stacks — irrespective of the framework, the principles of performance optimisation remain largely consistent.</p></div>
                            
                                

                                                                                                                                                                
                                <h2 class="t-3xl ">Quick tips from Nearform’s website optimisation checklist</h2>
                            
                                

                                                                                                                                    
                                
<div class="t-small ">
<p>Drawing from our past experiences, we've developed a checklist that can serve as a go-to guide for website optimisation. This checklist encapsulates some of the essential steps every developer should follow to ensure their website performs at its best. While we include general optimisation techniques, we also provide specific insights for Next.js.</p><p>Here are some quick tips from our checklist to get you started, and remember, our team at Nearform is always ready to provide expert guidance tailored to your specific needs.</p></div>
                            
                                

                                                                                                
                                
<div class="t-large ">
<h3>Must-do: Set up effective monitoring</h3></div>
                            
                                

                                                                                                
                                
<div class="t-small ">
<p>Before delving into the intricacies of performance optimisation, it's essential to establish a robust monitoring system tailored to your application. Effective monitoring not only allows you to track the impact of optimisations over time but also facilitates the identification of potential bottlenecks, empowering you to make informed decisions for further performance enhancements.</p><p>For real-time monitoring and visualisation, leveraging tools like Prometheus and Grafana is paramount. Prometheus excels in collecting metrics from both your application and infrastructure, while Grafana offers comprehensive visualisation and analysis capabilities. Together, they enable you to scrutinise crucial performance indicators and help identify performance bottlenecks and resource constraints, paving the way for targeted optimisations.</p><p>In addition to real-time monitoring, investing in performance monitoring tools like New Relic or Datadog provides invaluable insights into your application's performance at a deeper level. By analysing this data, you can fine-tune your application's performance, ensuring seamless user experiences across various scenarios.</p></div>
                            
                                

                                                                                                
                                
<div class="t-large ">
<h3>Vital check: Perfect your caching setup</h3></div>
                            
                                

                                                                                                
                                
<div class="t-small ">
<p>Caching is one of the most effective ways to improve the performance of your web application. When implementing caching, it's essential to identify which resources and data should be cached, how long they should be cached for and when the cache should be refreshed.</p><p>When dealing with large volumes of data, slow performance can be a common issue. Even fetching cached data can unexpectedly take a significant amount of time. Our experience reveals the efficacy of compressing data (using formats like gzip or brotli) before caching, as it can notably enhance load times. It's worth noting that certain caching solutions offer this compression option out of the box; keeping an eye out for such features could yield significant performance improvements.</p></div>
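<p>As a rough illustration of the compression point (framework-agnostic, sketched here in Python with a hypothetical payload), compressing a large JSON response before caching it shrinks it substantially and round-trips losslessly:</p>

```python
import gzip
import json

# Hypothetical payload standing in for a large cached API response.
payload = json.dumps(
    [{"id": i, "name": f"item-{i}", "tags": ["a", "b", "c"]} for i in range(1000)]
).encode("utf-8")

# Compress before writing to the cache; decompress on the read path.
compressed = gzip.compress(payload)
print(f"raw: {len(payload)} bytes, gzipped: {len(compressed)} bytes")

# The round trip is lossless, so cached reads return identical data.
assert gzip.decompress(compressed) == payload
```

<p>The same pattern applies with brotli or any other codec your caching layer supports.</p>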
                            
                                

                                                                                                
                                
<div class="t-large ">
<h3>Necessary validation: Confirm connection reuse</h3></div>
                            
                                

                                                                                                
                                
<div class="t-small ">
<p>Connection reuse is a crucial strategy for optimising the performance of web applications. By reusing existing connections, instead of establishing new ones for each request, you can reduce latency, and improve throughput and the overall speed of your application. This is particularly important for applications that make numerous HTTP requests or database queries.</p><p>Reusing connections eliminates the need for repeated Transmission Control Protocol (TCP) handshakes, reducing the time it takes to establish a connection and start data transfer while also reducing the computational and memory overhead associated with establishing and tearing down connections, leading to more efficient resource utilisation.</p><p>Ensure that your web server is configured to support Keep-Alive connections. If your application makes frequent database queries, consider using connection pooling to reuse database connections. When making HTTP requests from your Next.js application, opt for HTTP clients that support Keep-Alive. For instance, Node.js's HTTP and HTTPS modules inherently support Keep-Alive, while some third-party libraries, such as axios, may require explicit configuration for this feature. Set an appropriate timeout value to strike a balance between reusing connections efficiently and avoiding the risks of keeping idle connections open for too long.</p></div>
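<p>The principle is the same in any runtime. As a sketch using only Python's standard library (the server and handler here are illustrative), the snippet below serves two requests over a single persistent HTTP/1.1 connection, so the second request skips the TCP handshake entirely:</p>

```python
import http.client
import http.server
import threading

class Handler(http.server.BaseHTTPRequestHandler):
    # HTTP/1.1 keeps the TCP connection open between requests by default.
    protocol_version = "HTTP/1.1"

    def do_GET(self):
        body = b"ok"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo output quiet

server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# One HTTPConnection reuses the same TCP socket for both requests.
conn = http.client.HTTPConnection("127.0.0.1", server.server_address[1])
responses = []
for _ in range(2):
    conn.request("GET", "/")
    responses.append(conn.getresponse().read())
conn.close()
server.shutdown()
print(responses)  # [b'ok', b'ok']
```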
                            
                                

                                                                                                
                                
<div class="t-large ">
<h3>Optimise your CDN strategy</h3></div>
                            
                                

                                                                                                
                                
<div class="t-small ">
<p>Content Delivery Networks (CDNs) play a vital role in improving the performance and reliability of web applications by distributing content closer to users.</p><p>Select a CDN provider that suits your needs. Popular options include Cloudflare, AWS CloudFront, Akamai and Fastly. Each provider offers different features and pricing models, so choose one that aligns with your performance and budget requirements. Enable compression on your CDN to further reduce the size of transmitted data. Most CDNs support Gzip and Brotli compression. Use HTTP headers to control how and when your assets are cached.</p></div>
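<p>One common caching policy (illustrative, not a universal recommendation) is to let the CDN cache fingerprinted build assets for a long time while forcing revalidation of HTML; sketched in Python:</p>

```python
def cache_headers(path: str) -> dict:
    """Pick a Cache-Control header by asset type (illustrative policy)."""
    if path.endswith((".js", ".css", ".woff2", ".png", ".webp")):
        # Fingerprinted build assets never change in place, so a CDN can
        # safely cache them for a year.
        return {"Cache-Control": "public, max-age=31536000, immutable"}
    # HTML should be revalidated so returning visitors see fresh content.
    return {"Cache-Control": "public, max-age=0, must-revalidate"}

print(cache_headers("/static/app.3f2a1c.js")["Cache-Control"])
```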
                            
                                

                                                                                                
                                
<div class="t-large ">
<h3>Data efficiency: Optimise <code>__NEXT_DATA__</code></h3></div>
                            
                                

                                                                                                
                                
<div class="t-small ">
<p>The <code>__NEXT_DATA__</code> object is a global variable injected by Next.js into every page on the client side to help with faster rendering, and it can grow in size extremely fast (<a href="https://nextjs.org/docs/pages/building-your-application/data-fetching" target="_blank">read more</a>).</p><p>Often, when Next.js apps have performance issues, <code>__NEXT_DATA__</code> is a good place to focus for performance improvements. While the <code>__NEXT_DATA__</code> object in Next.js is a powerful tool, misusing it can lead to serious performance penalties. The key is to keep only crucial and essential data about the page in this object.</p><p>Review the data and remove any piece of it that is not needed for the initial render. We’ve seen huge improvements just by double-checking for duplicate data or removing properties that weren’t needed initially (and you definitely have some of those).</p></div>
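<p>A quick way to see what you are shipping is to measure the payload directly. The helper below is a hypothetical diagnostic of ours, not part of Next.js; in practice you would run it against a fetched production page before and after trimming props:</p>

```python
import re

def next_data_size(html: str) -> int:
    """Byte size of the __NEXT_DATA__ JSON embedded in a rendered page,
    or 0 if the script tag is absent."""
    match = re.search(
        r'<script id="__NEXT_DATA__"[^>]*>(.*?)</script>', html, re.DOTALL
    )
    return len(match.group(1).encode("utf-8")) if match else 0

# Hypothetical rendered page standing in for real server output.
sample = (
    '<html><body><script id="__NEXT_DATA__" type="application/json">'
    '{"props":{"pageProps":{}}}</script></body></html>'
)
print(next_data_size(sample))
```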
                            
                                

                                                                                                
                                
<div class="t-large ">
<h3>Picture perfect: Ensuring ongoing image optimisation</h3></div>
                            
                                

                                                                                                
                                
<div class="t-small ">
<p><a href="https://developer.chrome.com/docs/lighthouse/overview" target="_blank">Lighthouse</a> is an open-source automated tool for measuring website performance and quality. It often flags images as significant performance issues, especially on mobile. Common problems include improperly sized and unoptimised images. To address these issues, the Next.js <code>next/image</code> component is highly effective:</p><ul><li><p>Automatic optimisation: Images are optimised based on device characteristics, reducing bandwidth and improving Cumulative Layout Shift (CLS) metrics.</p></li><li><p>Lazy loading: Images load as they enter the viewport, enhancing initial page load times.</p></li><li><p>Modern formats: Supports WebP and AVIF formats, which offer better compression and quality.</p></li><li><p>Responsive images: Automatically selects appropriate sizes for various screen sizes.</p></li><li><p>Placeholders: Provides visual feedback during loading, improving perceived performance.</p></li></ul></div>
                            
                                

                                                                                                
                                
<div class="t-large ">
<h3>Future-proof: Keeping your Next.js application up to date</h3></div>
                            
                                

                                                                                                
                                
<div class="t-small ">
<p>Maintaining an up-to-date Next.js application is crucial for ensuring optimal performance. Regular updates bring numerous benefits, including performance enhancements, security patches, and new features.</p><p>Each Next.js release often includes optimisations that enhance performance. These updates can reduce build times, improve server-side rendering speed, and enhance client-side performance.</p></div>
                            
                                

                                                                                                
                                
<div class="t-large ">
<h3>Cut the clutter: Removing unused JS/CSS</h3></div>
                            
                                

                                                                                                
                                
<div class="t-small ">
<p>Eliminating unused JavaScript (JS) and CSS is a key strategy for optimising the performance of your Next.js application. Removing unused code reduces the overall size of your application, leading to quicker load times and a more responsive user experience.</p><p>Metrics such as First Contentful Paint (FCP), Time to Interactive (TTI), and Largest Contentful Paint (LCP) improve when there is less code to parse, compile, and execute. Smaller files are more efficient to cache, making repeated visits faster and reducing server load.</p><p>What can you do? Analyse your bundle and get rid of unnecessary code; make sure the project is set up for tree shaking; and remember that dynamic imports can help in certain situations.</p></div>
                            
                                

                                                                                                                                                                
                                <h2 class="t-3xl ">Building performance: piece by piece</h2>
                            
                                

                                                                                                                                    
                                
<div class="t-small ">
<p>Every project is unique, and fine-tuning an application’s performance will look different for each one. The above list serves as a guide for securing some big wins when it comes to performance tuning. The most important thing is to have a strategy and improve performance piece by piece.</p></div>
                            
                                

                                                                                                                                                                                                ]]>
        </description>
    </item>
      <item>
        <title><![CDATA[How to create a GenAI agent using Semantic Kernel]]></title>
        <link>https://nearform.estd.dev/digital-community/how-to-create-a-genai-agent-using-semantic-kernel</link>
        <guid>https://nearform.estd.dev/@/page/fec91398-c786-4867-8ce2-7617d07d2dbf</guid>

      
                    <category>Digital Community</category>
              
    

        <pubDate>Fri, 09 Aug 2024 00:00:00 +0000</pubDate>
                            <media:content url="https://nearform.estd.dev/media/pages/digital-community/how-to-create-a-genai-agent-using-semantic-kernel/86cf5a589e-1725542994/how-to-create-a-genai-agent-using-semantic-kernel-500x300-crop-q80.png" type="image/png" medium="image"> </media:content>
            
            <description>
                
            <![CDATA[
            <h2>

Our example sets an AI agent the aim of retrieving, calculating and plotting financial data
</h2>
                                                                                                                                        
                                
<div class="t-large ">
<p>In this article, we explore what <a href="https://github.com/microsoft/semantic-kernel" target="_blank">Microsoft's Semantic Kernel</a> is, what the advantages of using it are, and provide a simple example of how to use Semantic Kernel to create an AI agent to accomplish the task of displaying different value graphs for a stock in a specific time period.</p><p>This provides a reasonably complex example that uses external data sources and custom data formatting.</p></div>
                            
                                

                                                                                                                                                                
                                <h2 class="t-3xl ">What is Semantic Kernel?</h2>
                            
                                

                                                                                                                                    
                                
<div class="t-small ">
<p>Microsoft’s Semantic Kernel is a software development kit (SDK) that integrates Large Language Models (LLMs) with programming languages like C#, Python, and Java. The SDK does this by allowing us to define programming functions as plugins that can be easily chained together in a few lines of code.</p><p>Another major feature of Semantic Kernel is its ability to automatically orchestrate (or coordinate) plugins using AI. Simply provide the Semantic Kernel with an LLM and the plugins you need, then create a planner. When you ask the planner to achieve a specific goal, the Semantic Kernel will execute the plan for you.</p></div>
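<p>Conceptually, a plugin is just a class whose methods carry metadata the planner can read. The sketch below illustrates the pattern with a stand-in decorator in place of Semantic Kernel's real <code>@kernel_function</code>, so it runs without the SDK installed; the names here are ours, not the SDK's:</p>

```python
def kernel_function(description: str):
    """Stand-in for semantic_kernel's @kernel_function decorator: it tags a
    method with a description the planner reads to decide when to call it."""
    def wrap(fn):
        fn.__sk_description__ = description
        return fn
    return wrap

class MathPlugin:
    @kernel_function(description="Adds two numbers together")
    def add(self, a: float, b: float) -> float:
        return a + b

plugin = MathPlugin()
# A planner would match a goal like "add 10 degrees to the temperature"
# against this description and invoke the function on our behalf.
print(plugin.add.__sk_description__)
print(plugin.add(2.0, 3.0))
```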
                            
                                

                                                                                                                                                                
                                <h2 class="t-3xl ">What is an AI agent?</h2>
                            
                                

                                                                                                                                    
                                
<div class="t-small ">
<p>An AI agent is a software program designed to perform tasks autonomously to achieve predetermined goals. It interacts with its environment (such as code functions, APIs, and data sources) and independently determines the best tools and actions to use for each task. AI agents can adapt to changes in their environment, learn from experiences, and improve their performance over time.</p><p>In summary, the user provides the goal which the AI should achieve and the AI will determine and execute the best approach to achieving that result.</p></div>
                            
                                

                                                                                                                                                                
                                <h2 class="t-3xl ">AI agent: Financial stock plotter</h2>
                            
                                

                                                                                                                                    
                                
<div class="t-small ">
<p>We utilised Semantic Kernel to create an AI agent. The aim of the agent is to retrieve financial data, calculate the drawdown of the data, and finally plot both the data and the drawdown.</p></div>
                            
                                

                                                                                                                                                                                                <figure>
                                        <img src="https://nearform.estd.dev/media/pages/digital-community/how-to-create-a-genai-agent-using-semantic-kernel/23d550ac79-1725542994/how-to-create-a-genai-agent-using-semantic-kernel-semantic-20kernel-20ai-20agent-500x300-crop-q80.png" style="width: 100%; height: auto; margin-top: 20px; margin-bottom: 10px;" alt="GenAI agent financial tracker" />
                                    </figure>
                                                                
                                

                                                                                                
                                
<div class="t-small ">
<p>The above diagram shows how the agent communicates with the required APIs and functions to perform the desired tasks. A step-by-step explanation follows:</p><p>1. The user asks a question via the terminal: <code>User: give me details on Apple over the last 6 months</code></p><p>2. The agent processes the query to extract the required fields:</p><p>a. <code>ticker_name: AAPL</code></p><p>b. <code>period: 6mo</code></p><p>3. The agent calls a function to retrieve the historical data from the Yahoo Finance API. This data includes a variety of information on the stock over the period provided. The values we are interested in are <code>date</code> and <code>close</code>:</p><p>a. <strong>date:</strong> the day the stock values were at the level indicated</p><p>b. <strong>close:</strong> the value of the stock when the financial markets closed on that day</p><p>4. The agent calls a function to calculate the drawdown:</p><p>a. The <a href="https://en.wikipedia.org/wiki/Drawdown_(economics)" target="_blank">drawdown</a> is a financial measure that takes the price of a stock at a certain point in time, finds the previous maximum value the stock held, and subtracts that maximum from the current value. The drawdown is a negative score, as the current value is never greater than the cumulative maximum up to the date we are looking at. The closer the drawdown is to zero, the closer the stock is to its maximum value.</p><p>5. The agent then generates dynamic Python code to plot the data in a valid format using <a href="https://pandas.pydata.org/" target="_blank">pandas</a> and <a href="https://matplotlib.org/" target="_blank">matplotlib</a>.</p><p>6. Finally, the agent attaches the data calculated earlier to the generated code, then executes it.</p></div>
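<p>The drawdown in step 4 takes only a few lines to compute; a minimal sketch over daily closing values:</p>

```python
def drawdown(closes):
    """Drawdown at each point: closing value minus the running maximum so far.
    Always <= 0; a value near zero means the stock is near its peak."""
    series, running_max = [], float("-inf")
    for value in closes:
        running_max = max(running_max, value)
        series.append(value - running_max)
    return series

print(drawdown([100, 110, 105, 120, 90]))  # [0, 0, -5, 0, -30]
```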
                            
                                

                                                                                                                                                                                                <figure>
                                        <img src="https://nearform.estd.dev/media/pages/digital-community/how-to-create-a-genai-agent-using-semantic-kernel/983ef41574-1725542994/how-to-create-a-genai-agent-using-semantic-kernel-aapl-500x300-crop-q80.png" style="width: 100%; height: auto; margin-top: 20px; margin-bottom: 10px;" alt="AAPL" />
                                    </figure>
                                                                
                                

                                                                                                                                                                
                                <h2 class="t-3xl ">Creating an agent</h2>
                            
                                

                                                                                                                                    
                                
<div class="t-small ">
<p>At the time of writing, Semantic Kernel offers four types of agents, called planners.</p></div>
                            
                                

                                                                                                
                                
<div class="t-large ">
<h3>Sequential planner</h3></div>
                            
                                

                                                                                                
                                
<div class="t-small ">
<p>Sequential planner takes a goal (prompt) and runs through the plugins provided to the agent at creation time, in the order requested by the goal. The loop will continue until the goal is complete. This is the agent we used to accomplish our goal.</p><p>The process our planner follows is shown below:</p></div>
                            
                                

                                                                                                                                                                                                <figure>
                                        <img src="https://nearform.estd.dev/media/pages/digital-community/how-to-create-a-genai-agent-using-semantic-kernel/5186dc236e-1725542994/how-to-create-a-genai-agent-using-semantic-kernel-user-query-500x300-crop-q80.png" style="width: 100%; height: auto; margin-top: 20px; margin-bottom: 10px;" alt="User query" />
                                    </figure>
                                                                
                                

                                                                                                
                                
<div class="t-large ">
<h3>Stepwise planner</h3></div>
                            
                                

                                                                                                
                                
<div class="t-small ">
<p>An agent that runs through the user request and processes it step by step. It differs from the sequential planner by running a specific plugin against only part of the query. This allows the agent to break the query down into individual parts and run each part against the relevant plugin.</p><p>For example, if we asked “What would be the temperature in New York if we added 10 degrees to the current temperature?”, the stepwise planner would be able to split the weather API call and the math problem out of the query and run them separately, answering based on the results.</p></div>
                            
                                

                                                                                                
                                
<div class="t-large ">
<h3>Handlebar planner</h3></div>
                            
                                

                                                                                                
                                
<div class="t-small ">
<p>The main advantage of the handlebar planner over the stepwise planner is its use of the Handlebars templating language to generate the plan. The majority of LLMs already support the Handlebars syntax when dealing with prompts, which helps improve the accuracy of the response (in this case, the agent plan). This contrasts with the stepwise planner, which uses XML when generating the plan. Using this language, we can also express loops and conditions that would otherwise only be available in programming languages.</p><p>Another advantage of this planner is that the return value of the serialised function <code>planner.getPlan</code> is a readable text file that can be stored and reloaded at a later date. However, as of the time of writing, this planner is not available in the Python version of the kernel.</p></div>
                            
                                

                                                                                                
                                
<div class="t-large ">
<h3>Custom planner</h3></div>
                            
                                

                                                                                                
                                
<div class="t-small ">
<p>Semantic Kernel also offers the ability to create an agent with custom rules. This involves more complexity but gives the developer more freedom to build the agent exactly as they intend. It is particularly useful when you want human intervention during the agent planning stages, for example, to provide new variables or access custom data sources.</p><p>At the time of writing, the team behind Semantic Kernel intends to deprecate the Handlebars and stepwise planners in favour of this approach.</p></div>
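<div class="t-small ">
<p>A minimal, self-contained sketch of the human-in-the-loop idea (this is our own illustration, not the Semantic Kernel API): each proposed step is passed to an approval callback before it is executed.</p></div>

```python
# Hypothetical custom planning loop: every planned step must be
# approved (e.g. by a human reviewer) before its tool is executed.

def run_custom_plan(steps, tools, approve):
    """Execute each planned step only if the approver allows it."""
    results = []
    for step in steps:
        if not approve(step):          # human-in-the-loop checkpoint
            continue
        tool = tools[step["tool"]]
        results.append(tool(step["args"]))
    return results

# Example: two planned steps, a reviewer that only approves "double"
tools = {"double": lambda x: x * 2, "negate": lambda x: -x}
plan = [{"tool": "double", "args": 21}, {"tool": "negate", "args": 5}]
print(run_custom_plan(plan, tools, approve=lambda s: s["tool"] == "double"))  # [42]
```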
                            
                                

                                                                                                
                                
<div class="t-large ">
<h3>Our agent</h3></div>
                            
                                

                                                                                                
                                
<div class="t-small ">
<p>The below code demonstrates how we created our agent using the Sequential planner:</p></div>
                            
                                

                                                                                                
                                <figure class="code_container">
        

    <figcaption>
        <span>python</span> <!-- Display the languageTitle or languageCode -->
        <button class="copy-button" data-clipboard-target="#code_block_69d0b91b1d773">Copy to clipboard</button>
    </figcaption>
    <pre><code id="code_block_69d0b91b1d773" class="language-python">from semantic_kernel import Kernel
from semantic_kernel.connectors.ai.open_ai import OpenAIChatCompletion, OpenAIChatPromptExecutionSettings
from semantic_kernel.functions import KernelArguments
from semantic_kernel.planners import SequentialPlanner
from plugins import CodeRefactor, FinancePlugin

# Create the kernel instance
kernel = Kernel()

service_id = &quot;chat&quot;

execution_settings = OpenAIChatPromptExecutionSettings(
    service_id=service_id
)

# Configure the AI service used by the kernel
kernel.add_service(
    OpenAIChatCompletion(
        service_id=service_id,
        api_key=open_ai_key,
        ai_model_id=ai_model_id,
    ),
)

# Add our custom plugins to the kernel
kernel.add_plugin(parent_directory=&quot;./plugins&quot;, plugin_name=&quot;ChatPlugin&quot;)
kernel.add_plugin(plugin=FinancePlugin(), plugin_name=&quot;FinancePlugin&quot;)
kernel.add_plugin(parent_directory=&quot;./plugins&quot;, plugin_name=&quot;ChartPlugin&quot;)
kernel.add_plugin(plugin=CodeRefactor(), plugin_name=&quot;CodePlugin&quot;)

# Define the details to be passed to the planner
arguments = KernelArguments(settings=execution_settings)
arguments[&quot;user_input&quot;] = &quot;Show me the drawdown for Apple stock&quot;

goal = &#039;&#039;&#039;Based on the user_input argument, chat with the AI to get
           the ticker_name and period if available,
           extract the data for a ticker over time,
           create a drawdown, plot the chart,
           refactor the code to include the drawdown data.
        &#039;&#039;&#039;

# Instantiate the planner against the kernel
planner = SequentialPlanner(kernel, service_id=service_id)

# Create the plan
plan = await planner.create_plan(goal=goal)

# Execute the plan
result = await plan.invoke(kernel=kernel, arguments=arguments)</code></pre>
</figure>
                            
                                

                                                                                                
                                
<div class="t-large ">
<h3>Steps to create an AI agent</h3></div>
                            
                                

                                                                                                
                                
<div class="t-small ">
<ol><li><p><strong>Create the kernel instance</strong>: This is the starting point that maps our planner to the plugins and the LLM. We can create more than one planner with the same kernel, allowing for parallel agents if needed.</p></li><li><p><strong>Configure the AI service used by the kernel</strong>: This allows the tool to interpret user input into the right parameters and ensures the right tool is called at the right time to accomplish the goal.</p></li><li><p><strong>Add our custom plugins to the kernel</strong>: This makes non-standard LLM calls usable by our agent.</p></li><li><p><strong>Define the details to be passed to the planner</strong>: These include standard LLM settings (temperature, token limit, …) and function-handling instructions.</p></li><li><p><strong>Create the plan</strong>: Define the “system message” for the planner as the goal. This instructs the LLM about the behaviour we expect our agent to have. The goal should be straightforward and use terminology similar to that in the tools' descriptions.</p></li><li><p><strong>Execute the plan</strong>: This connects the details defined in step 4 to the plan and triggers the invocation step for the plan to start.</p></li></ol></div>
                            
                                

                                                                                                                                                                
                                <h2 class="t-3xl ">Plugins</h2>
                            
                                

                                                                                                                                    
                                
<div class="t-small ">
<p>A Semantic Kernel plugin is a software component designed to extend the LLM's functionality, either by granting the LLM capabilities it did not previously have (code plugins) or by confining the purpose of the LLM invocation to a specific topic (prompt plugins).</p><p>In contrast to tools supported natively by the LLM (such as function invocation), a plugin is not attached directly to the model but is placed within the planner. The planner generates a step-by-step approach, checks whether any step of the goal is associated with a plugin, and calls the function in question.</p><p>The main drawback of plugins is that their functions must return strings. Structured data would be easier to manipulate in code and would save the resources spent ‘stringifying’ and parsing all our content. This constraint exists because Semantic Kernel does not know the planner's plan before it is in action, so every response needs to be compatible with an LLM call, which defaults to text.</p></div>
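<div class="t-small ">
<p>Because plugin functions must exchange strings, structured results are typically serialised to JSON on the way out and parsed again on the way in. A minimal stdlib sketch of that round-trip (the function names are ours, for illustration):</p></div>

```python
import json

def plugin_return(rows: list[dict]) -> str:
    """A plugin must hand its structured result back as a string."""
    return json.dumps(rows)

def next_step(payload: str) -> list[dict]:
    """The next plan step parses the string back into structured data."""
    return json.loads(payload)

rows = [{"Date": "2024-01-01", "Close": 102.02}]
# The round-trip is lossless, but costs a serialise/parse on every hop.
assert next_step(plugin_return(rows)) == rows
```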
                            
                                

                                                                                                
                                
<div class="t-large ">
<h3>Code plugins</h3></div>
                            
                                

                                                                                                
                                
<div class="t-small ">
<p>Code plugins expand functionality by granting the LLM new “skills” it did not previously have.</p><p>The function is a standard Python function. To turn it into a code plugin we added the function decorator <code>kernel_function</code>. This decorator expects two arguments:</p><ul><li><p><strong>name</strong>, the name of the function to be referenced when attaching the plugin.</p></li><li><p><strong>description</strong>, a string that indicates the intention of the function. This helps the planner identify when it needs to call this function and when it should avoid it.</p></li></ul><p>The two main use cases for code plugins are processing data according to the developer’s intention and accessing external APIs. We demonstrate both abilities below:</p></div>
                            
                                

                                                                                                
                                
<div class="t-small ">
<p><strong>Accessing an external API</strong></p></div>
                            
                                

                                                                                                
                                <figure class="code_container">
        

    <figcaption>
        <span>python</span> <!-- Display the languageTitle or languageCode -->
        <button class="copy-button" data-clipboard-target="#code_block_69d0b91b1d812">Copy to clipboard</button>
    </figcaption>
    <pre><code id="code_block_69d0b91b1d812" class="language-python">import json

import yfinance as yf
import pandas as pd
from semantic_kernel.functions.kernel_function_decorator import kernel_function

# Defines this function as a plugin function
@kernel_function(
    name=&quot;financial_info&quot;,
    description=&quot;gets the historic details for a stock&quot;)
def financial_info(self, TICKER_AND_PERIOD):

    if not TICKER_AND_PERIOD:
        raise Exception(&quot;No ticker value provided&quot;)

    args = json.loads(TICKER_AND_PERIOD)
    ticker_name = args[&quot;ticker_name&quot;]
    period = args[&quot;period&quot;]

    if ticker_name is not None:
        ticker = yf.Ticker(ticker_name)
        time = period if period is not None else &#039;1y&#039;
        data = ticker.history(period=time)
        data.index = data.index.strftime(&#039;%Y-%m-%d&#039;)
        data[&quot;Close&quot;] = data[&quot;Close&quot;].round(4)

        print(&#039;data retrieval complete&#039;)

        return data.to_json()</code></pre>
</figure>
                            
                                

                                                                                                
                                
<div class="t-small ">
<p>As shown in the code snippet above, we retrieve the historical data for a specific ticker using the Yahoo Finance API.</p><p>We created the base function for this by instantiating a Ticker based on the input. Notice that the model converts the string input into the required variables before calling the plugin.</p><p>We then use the historical data to get the closing prices for the specified period. Finally, we manipulate the pandas DataFrame to format the <code>Close</code> column and the <code>date</code> index, as these will be used in later steps of the process.</p></div>
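<div class="t-small ">
<p>The plugin receives its arguments as a single JSON string. A minimal stdlib sketch of that parsing step (the helper name and the <code>'1y'</code> default mirror the plugin above, but this standalone function is ours):</p></div>

```python
import json

def parse_ticker_args(ticker_and_period: str) -> tuple[str, str]:
    """Parse the JSON argument string the planner passes to the plugin,
    defaulting the period to '1y' when it is not supplied."""
    if not ticker_and_period:
        raise ValueError("No ticker value provided")
    args = json.loads(ticker_and_period)
    period = args.get("period") or "1y"
    return args["ticker_name"], period

print(parse_ticker_args('{"ticker_name": "AAPL", "period": null}'))  # ('AAPL', '1y')
```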
                            
                                

                                                                                                
                                
<div class="t-small ">
<p><strong>Processing data</strong></p></div>
                            
                                

                                                                                                
                                <figure class="code_container">
        

    <figcaption>
        <span>python</span> <!-- Display the languageTitle or languageCode -->
        <button class="copy-button" data-clipboard-target="#code_block_69d0b91b1d849">Copy to clipboard</button>
    </figcaption>
    <pre><code id="code_block_69d0b91b1d849" class="language-python">from io import StringIO
from typing import Annotated

import pandas as pd
from semantic_kernel.functions.kernel_function_decorator import kernel_function

@kernel_function(name=&quot;drawdown&quot;, description=&quot;Calculate the drawdown values for the stock&quot;)
def drawdown(self, data_json: Annotated[str, &quot;the data from the stock&quot;]):
    data = pd.read_json(StringIO(data_json))

    data[&#039;Peak&#039;] = data[&#039;Close&#039;].cummax()
    data[&#039;Drawdown&#039;] = data[&#039;Close&#039;] - data[&#039;Peak&#039;]
    data = data.drop(columns=[&quot;Peak&quot;])

    print(&#039;drawdown calculated&#039;)

    return str(data.to_json(orient=&#039;split&#039;))</code></pre>
</figure>
                            
                                

                                                                                                
                                
<div class="t-small ">
<p>We want to calculate the drawdown for the data based on the formula: <code>drawdown = current - cummax</code>. Pandas provides us with the functionality required to perform this calculation.</p><p>Our original data looked like this:</p></div>
                            
                                

                                                                                                
                                
<div class="table t-small">



  <table>
    <thead>
      <tr>
                  <th>Date</th>
                  <th>Close</th>
              </tr>
    </thead>
    <tbody>
              <tr>
                      <td>2024-01-01</td>
                      <td>102.02</td>
                  </tr>
              <tr>
                      <td>2024-01-02</td>
                      <td>101.01</td>
                  </tr>
          </tbody>
  </table>
</div>                            
                                

                                                                                                
                                
<div class="t-small ">
<p>After the function run our data looks like this:</p></div>
                            
                                

                                                                                                
                                
<div class="table t-small">



  <table>
    <thead>
      <tr>
                  <th>Date</th>
                  <th>Close</th>
                  <th>Drawdown</th>
              </tr>
    </thead>
    <tbody>
              <tr>
                      <td>2024-01-01</td>
                      <td>102.02</td>
                      <td>0</td>
                  </tr>
              <tr>
                      <td>2024-01-02</td>
                      <td>101.01</td>
                      <td>-1.01</td>
                  </tr>
          </tbody>
  </table>
</div>                            
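<div class="t-small ">
<p>As a sanity check, the same <code>drawdown = close - cummax</code> arithmetic can be reproduced in a few lines of plain Python, using the sample closes from the table above:</p></div>

```python
# Reproduce drawdown = close - running maximum for the sample data.
closes = [102.02, 101.01]

drawdowns = []
peak = float("-inf")
for close in closes:
    peak = max(peak, close)          # pandas cummax equivalent
    drawdowns.append(round(close - peak, 2))

print(drawdowns)  # [0.0, -1.01]
```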
                                

                                                                                                
                                
<div class="t-small ">
<p>Now that we have our data available, we will be moving to a prompt plugin to generate dynamic code.</p></div>
                            
                                

                                                                                                
                                
<div class="t-large ">
<h3>Prompt plugins</h3></div>
                            
                                

                                                                                                
                                
<div class="t-small ">
<p>Prompt plugins are used to alter the original intention of the LLM being invoked by providing it with new instructions. Each plugin consists of two files: a prompt file and a JSON config file. The prompt file contains the instructions that the LLM must abide by; the JSON file contains the LLM settings to take into account.</p><p>The content of these files looks like this:</p></div>
                            
                                

                                                                                                
                                <figure class="code_container">
        

    <figcaption>
        <span>Markdown</span> <!-- Display the languageTitle or languageCode -->
        <button class="copy-button" data-clipboard-target="#code_block_69d0b91b1d8f3">Copy to clipboard</button>
    </figcaption>
    <pre><code id="code_block_69d0b91b1d8f3" class="language-text">You are a chatbot that will retrieve the ticker_name and the period
(if provided) for the company in the input, in a json format

Example:
    query: Give me details on Apple over the last year
    result: ticker_name = AAPL, period= 1y

    query: Give me details on Nvidia
    result: ticker_name = NVDA, period= null

    query: What colour is the sky
    result: No ticker available

input={{$user_input}}</code></pre>
</figure>
                            
                                

                                                                                                
                                <figure class="code_container">
        

    <figcaption>
        <span>json</span> <!-- Display the languageTitle or languageCode -->
        <button class="copy-button" data-clipboard-target="#code_block_69d0b91b1d907">Copy to clipboard</button>
    </figcaption>
    <pre><code id="code_block_69d0b91b1d907" class="language-json">{
  &quot;schema&quot;: 1,
  &quot;description&quot;: &quot;Retrieve the ticker_name and period for the company in the user input&quot;,
  &quot;execution_settings&quot;: {
    &quot;default&quot;: {
      &quot;max_tokens&quot;:4096,
      &quot;temperature&quot;: 0.1,
      &quot;stop_sequences&quot;: [
        &quot;[Done]&quot;
      ]
    }
  }
}</code></pre>
</figure>
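<div class="t-small ">
<p>Before the prompt is sent, the kernel fills the <code>{{$user_input}}</code> placeholder from the kernel arguments. A simplified, stdlib-only illustration of that substitution (this is our own stand-in, not Semantic Kernel's actual templating engine):</p></div>

```python
import re

def render_prompt(template: str, variables: dict) -> str:
    """Replace {{$name}} placeholders with values from an arguments dict
    (a simplified stand-in for Semantic Kernel's prompt templating)."""
    return re.sub(
        r"\{\{\$(\w+)\}\}",
        lambda m: str(variables[m.group(1)]),
        template,
    )

prompt = render_prompt(
    "input={{$user_input}}",
    {"user_input": "Show me the drawdown for Apple stock"},
)
print(prompt)  # input=Show me the drawdown for Apple stock
```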
                            
                                

                                                                                                
                                
<div class="t-small ">
<p>The keyword filenames <code>skprompt.txt</code> and <code>config.json</code> let the planner know that these are prompt plugins. The prompt file contains a plain prompt instruction, similar to an LLM system message, which provides instructions to the model.</p><p>The config file determines the <code>kwargs</code> information for the prompt. It is the equivalent of writing the code below, but with the correct mappings and the input required for the part of the goal this step aims to achieve:</p></div>
                            
                                

                                                                                                
                                <figure class="code_container">
        

    <figcaption>
        <span>python</span> <!-- Display the languageTitle or languageCode -->
        <button class="copy-button" data-clipboard-target="#code_block_69d0b91b1d92c">Copy to clipboard</button>
    </figcaption>
    <pre><code id="code_block_69d0b91b1d92c" class="language-python">with open(&#039;skprompt.txt&#039;, &#039;r&#039;) as f:
    system_prompt = f.read()

stop_sequence = [&quot;[Done]&quot;]

response = client.chat.completions.create(
    model=current_model,
    messages=[
        {&quot;role&quot;: &quot;system&quot;, &quot;content&quot;: system_prompt},
        {&quot;role&quot;: &quot;user&quot;, &quot;content&quot;: user_input}
    ],
    temperature=0.1,
    max_tokens=4096,
    stop=stop_sequence,
)</code></pre>
</figure>
                            
                                

                                                                                                
                                
<div class="t-small ">
<p>The plugins are then attached to the kernel instance before we instantiate the planner. Hence, any planners we may have will benefit from the plugins provided.</p></div>
                            
                                

                                                                                                                                                                
                                <h2 class="t-3xl ">Conclusion</h2>
                            
                                

                                                                                                                                    
                                
<div class="t-small ">
<p>Various tools are available for creating GenAI agents for specific tasks, each with its strengths and weaknesses. Semantic Kernel is an interesting option for creating straightforward agents that perform relatively well on simple tasks.</p><p>As illustrated above, we can use Semantic Kernel to quickly create AI agents that retrieve and modify data, as well as perform LLM requests, to accomplish a predetermined goal. Semantic Kernel can use the LLM to devise the best approach to a task and decide which plugins to consume to ensure the task is completed.</p><p>The two main features explored in this article are plugins and planners. Plugins are versatile and powerful, requiring only minimal alteration to functions for code plugins and standard text values for prompt plugins. The variety of planners and their ease of creation make them very reliable for different types of agents.</p></div>
                            
                                

                                                                                                                                                                                                ]]>
        </description>
    </item>
      <item>
        <title><![CDATA[Streamline your mobile testing with Maestro]]></title>
        <link>https://nearform.estd.dev/digital-community/streamline-your-mobile-testing-with-maestro</link>
        <guid>https://nearform.estd.dev/@/page/dbb45cbb-506b-464f-93c0-5ea15d5ccfb9</guid>

      
                    <category>Digital Community</category>
              
    

        <pubDate>Fri, 02 Aug 2024 00:00:00 +0000</pubDate>
            <author>
                        </author>
                            <media:content url="https://nearform.estd.dev/media/pages/digital-community/streamline-your-mobile-testing-with-maestro/c2ee5fb1ef-1725542994/blog-streamline-your-mobile-testing-with-maestro-500x300-crop-q80.png" type="image/webp" medium="image" duration="10"> </media:content>
            
            <description>
                
            <![CDATA[
            <h2>

Comparing Maestro with other popular tools for mobile UI automation
</h2>
                                                                                                                                        
                                
<div class="t-large ">
<p>In the ever-evolving landscape of mobile app development, efficient and effective testing is crucial. Mobile testing ensures that applications work seamlessly across different devices and platforms. By choosing the right tools, teams can significantly enhance their productivity and the quality of their applications.</p></div>
                            
                                

                                                                                                
                                
<div class="t-small ">
<p>This article explores the features of <a href="https://maestro.mobile.dev/" target="_blank">Maestro</a>, a promising tool for mobile UI automation, and compares it with other popular tools such as <a href="https://docs.robotframework.org/docs/different_libraries/appium" target="_blank">Robot Framework Appium Library</a> and <a href="https://webdriver.io/" target="_blank">WebdriverIO</a>.</p></div>
                            
                                

                                                                                                                                                                
                                <h2 class="t-3xl ">Real-world application of Maestro</h2>
                            
                                

                                                                                                                                    
                                
<div class="t-small ">
<p>A common challenge in mobile UI testing is managing the complexity of test setups and ensuring consistent results across different environments. Maestro offers a solution by enabling users to perform tests using a simple YAML syntax, which can be easily understood and written by anyone in the organisation. This flexibility allows teams to quickly adapt to changes and make reliable decisions promptly.</p></div>
                            
                                

                                                                                                
                                
<div class="t-large ">
<h3>A simple Maestro test case</h3></div>
                            
                                

                                                                                                
                                
<div class="t-small ">
<p>Here is an example of how a Maestro test case is structured using YAML:</p></div>
                            
                                

                                                                                                
                                <figure class="code_container">
        

    <figcaption>
        <span>yaml</span> <!-- Display the languageTitle or languageCode -->
        <button class="copy-button" data-clipboard-target="#code_block_69d0b91b1ddc7">Copy to clipboard</button>
    </figcaption>
    <pre><code id="code_block_69d0b91b1ddc7" class="language-yaml">appId: org.wikipedia
---
- tapOn: &quot;ADD OR EDIT.*&quot;
- tapOn: &quot;ADD LANGUAGE&quot;
- tapOn:
    id: &quot;.*menu_search_language&quot;
- inputText: &quot;French&quot;
- assertVisible: &quot;Fran&ccedil;ais&quot;
- tapOn: &quot;Fran&ccedil;ais&quot;
- tapOn: &quot;Back&quot;</code></pre>
</figure>
                            
                                

                                                                                                
                                
<div class="t-small ">
<p>This example demonstrates a simple yet comprehensive test case for a hypothetical Wikipedia application:</p><p><strong>1. appId</strong>: Specifies the ID of the application to be tested.</p><p><strong>2. tapOn: "ADD OR EDIT.*"</strong>: Simulates a tap on any element whose text matches the pattern "ADD OR EDIT".</p><p><strong>3. tapOn: "ADD LANGUAGE"</strong>: Simulates a tap on the "ADD LANGUAGE" button.</p><p><strong>4. tapOn (with id)</strong>: Simulates a tap on an element whose ID matches the pattern "menu_search_language".</p><p><strong>5. inputText: "French"</strong>: Inputs the text "French" into the input field.</p><p><strong>6. assertVisible: "Français"</strong>: Verifies that the text "Français" is visible on the screen.</p><p><strong>7. tapOn: "Français"</strong>: Simulates a tap on the "Français" option.</p><p><strong>8. tapOn: "Back"</strong>: Simulates a tap on the "Back" button to return to the previous screen.</p></div>
                            
                                

                                                                                                
                                
<div class="t-large ">
<h3>Localised testing challenges</h3></div>
                            
                                

                                                                                                
                                
<div class="t-small ">
<p>Running tests in different languages can introduce complications, and this is true for any testing approach. With Maestro, this issue arises when the simulator's language settings do not match the language expected by the test scripts. For instance, if your simulator is set to Italian and your tests are designed for an English interface, the tests will fail because the locators (which depend on text strings) will not match.</p></div>
                            
                                

                                                                                                                                                                                                <figure>
                                        <img src="https://nearform.estd.dev/media/pages/digital-community/streamline-your-mobile-testing-with-maestro/b37fd5a84b-1725542994/blog-streamline-your-mobile-testing-with-maestro-a-running-test-with-maestro-500x300-crop-q80.png" style="width: 100%; height: auto; margin-top: 20px; margin-bottom: 10px;" alt="A running test with Maestro" />
                                    </figure>
                                                                
                                

                                                                                                
                                
<div class="t-small ">
<p>To address this, Maestro provides the ability to set up and configure simulators for different locales. Here are some solutions:</p><ul><li><p><strong>Locale configuration</strong>: Ensure that the simulator's locale matches the language of your test scripts. You can configure the locale using Maestro's command-line options:</p></li></ul></div>
                            
                                

                                                                                                
                                <figure class="code_container">
        

    <figcaption>
        <span>shell</span> <!-- Display the languageTitle or languageCode -->
        <button class="copy-button" data-clipboard-target="#code_block_69d0b91b1e510">Copy to clipboard</button>
    </figcaption>
    <pre><code id="code_block_69d0b91b1e510" class="language-shell">maestro start-device --platform ios --device-locale en_US --os-version=15</code></pre>
</figure>
                            
                                

                                                                                                
                                
<div class="t-small ">
<ul><li><p><strong>Use of IDs and universal locators</strong>: Instead of relying solely on text strings, use element IDs and universal locators that remain consistent across different languages.</p></li><li><p><strong>Parameterised tests</strong>: Create parameterised test scripts that can adapt to different locales by using variables for text strings, making it easier to run the same tests in multiple languages.</p></li></ul><p>By implementing these strategies, you can minimise discrepancies and ensure more reliable test results across various language settings with Maestro.</p></div>
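<div class="t-small ">
<p>As a sketch of the parameterised approach, a flow can read its text locators from a variable supplied on the command line, e.g. <code>maestro test -e LANGUAGE=Français flow.yaml</code> (the <code>LANGUAGE</code> variable name and the flow steps below are our own illustration):</p></div>

```yaml
# Hypothetical parameterised flow: the text locators come from the
# LANGUAGE variable instead of being hard-coded for one locale.
appId: org.wikipedia
---
- tapOn: "ADD LANGUAGE"
- inputText: ${LANGUAGE}
- assertVisible: ${LANGUAGE}
- tapOn: ${LANGUAGE}
```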
                            
                                

                                                                                                
                                
<div class="t-large ">
<h3>Emulator setup</h3></div>
                            
                                

                                                                                                
                                
<div class="t-small ">
<p>Setting up emulators for different locales and operating system versions can be a challenge with Maestro. For instance, initiating a test run on an iOS simulator in British English requires specific commands and configurations:</p></div>
                            
                                

                                                                                                
                                <figure class="code_container">
        

    <figcaption>
        <span>shell</span> <!-- Display the languageTitle or languageCode -->
        <button class="copy-button" data-clipboard-target="#code_block_69d0b91b1e55b">Copy to clipboard</button>
    </figcaption>
    <pre><code id="code_block_69d0b91b1e55b" class="language-shell">maestro start-device --platform ios --device-locale en_GB --os-version=17</code></pre>
</figure>
                            
                                

                                                                                                
                                
<div class="t-small ">
<p>It’s important to note that Maestro supports specific OS versions, such as iOS 15, 16, and 17, at the time of writing. For the most current list of supported versions, refer to the <a href="https://cloud.mobile.dev/reference/device-configuration#simulator-specs" target="_blank">Maestro documentation</a>. Setting up the appropriate emulator can be tricky, especially for those not familiar with mobile testing environments.</p></div>
                            
                                

                                                                                                
                                
<div class="t-large ">
<h3>Flaky tests and element inspection</h3></div>
                            
                                

                                                                                                
                                
<div class="t-small ">
<p>A critical aspect of mobile testing is the accurate inspection of elements within the application. <a href="https://maestro.mobile.dev/getting-started/maestro-studio" target="_blank">Maestro Studio</a> is a tool within Maestro that helps testers design and execute tests through a visual interface. However, relying solely on Maestro Studio may not always allow for precise element inspection, leading to flaky tests if elements are not correctly identified.</p><p>Element IDs are unique identifiers for UI elements within an app, similar to how IDs are used in web applications. These IDs are crucial for ensuring tests interact with the correct elements. Without access to the app’s codebase, it can be challenging to inspect and identify these elements accurately.</p><p>To address this, tools like <a href="https://github.com/appium/appium-inspector" target="_blank">Appium Inspector</a> can be used alongside Maestro. Appium Inspector provides a comprehensive solution for inspecting app elements, allowing testers to accurately identify and interact with elements even without direct access to the codebase. A common practice is to use such inspectors to identify elements and reference the app’s codebase when possible to retrieve IDs.</p><p>By combining these tools, testers can ensure more reliable and accurate test results, reducing the likelihood of flaky tests.</p></div>
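<div class="t-small ">
<p>As a small example, pairing ID-based selectors with Maestro’s <code>extendedWaitUntil</code> command can reduce flakiness caused by elements that render slowly (the ID below is hypothetical):</p></div>

<figure class="code_container">
    <figcaption>
        <span>yaml</span>
    </figcaption>
    <pre><code class="language-yaml"># Wait up to 10 seconds for the element to appear before tapping it,
# instead of failing immediately on a slow render.
- extendedWaitUntil:
    visible:
      id: "submit_button"
    timeout: 10000
- tapOn:
    id: "submit_button"</code></pre>
</figure>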
                            
                                

                                                                                                                                                                                                <figure>
                                        <img src="https://nearform.estd.dev/media/pages/digital-community/streamline-your-mobile-testing-with-maestro/2d576964dd-1725542994/blog-streamline-your-mobile-testing-with-maestro-maestro-studio-500x300-crop-q80.png" style="width: 100%; height: auto; margin-top: 20px; margin-bottom: 10px;" alt="" />
                                    </figure>
                                                                
                                

                                                                                                
                                
<div class="t-large ">
<h3>AI integration in Maestro</h3></div>
                            
                                

                                                                                                
                                
<div class="t-small ">
<p>One of Maestro’s main features is its integration of artificial intelligence (AI), which assists users in various ways:</p><ul><li><p><strong>Automated element identification</strong>: The AI helps automatically identify UI elements, reducing the time needed to write test cases.</p></li><li><p><strong>Intelligent test recommendations</strong>: Based on the application’s structure and previous test cases, Maestro's AI can suggest test steps and improvements, ensuring more comprehensive test coverage.</p></li></ul><p>Thanks to these AI capabilities, Maestro not only reduces setup time but also enhances the reliability of tests. Teams can then focus more on the core functionalities of their applications, ensuring higher quality and a faster time-to-market. This integration of AI into the testing process is relatively new, making Maestro an innovative tool in the mobile testing landscape.</p></div>
                            
                                

                                                                                                                                                                                                <figure>
                                        <img src="https://nearform.estd.dev/media/pages/digital-community/streamline-your-mobile-testing-with-maestro/c76b40b46b-1725542994/blog-streamline-your-mobile-testing-with-maestro-maestro-studio-500x300-crop-q80.gif" style="width: 100%; height: auto; margin-top: 20px; margin-bottom: 10px;" alt="" />
                                    </figure>
                                                                
                                

                                                                                                
                                
<div class="t-large ">
<h3>Parallel test execution</h3></div>
                            
                                

                                                                                                
                                
<div class="t-small ">
<p>While Maestro simplifies many aspects of mobile testing, it does not directly support running tests in parallel across different simulators. Even with two simulators and two terminals open, parallel execution is not possible. There is, however, the option to run tests in the cloud via Maestro’s hosted service, although this incurs additional costs.</p></div>
                            
                                

                                                                                                                                                                
                                <h2 class="t-3xl ">Comparing mobile testing tools</h2>
                            
                                

                                                                                                                                    
                                
<div class="t-small ">
<p>Choosing the right tool for mobile test automation can significantly impact efficiency, accessibility, and test maintenance. Here, we compare three main tools: Maestro, Robot Framework Appium Library, and WebdriverIO.</p></div>
                            
                                

                                                                                                
                                
<div class="t-large ">
<h3>Maestro</h3></div>
                            
                                

                                                                                                
                                
<div class="t-small ">
<p>Maestro is designed for simplicity and ease of use, making it ideal for teams with limited coding experience. Its YAML-based syntax allows for quick test creation and execution, which is particularly useful for small projects or teams new to automation. The integration of AI in Maestro Studio provides intelligent test recommendations and automated element identification, enhancing productivity and test reliability.</p></div>
                            
                                

                                                                                                
                                
<div class="t-large ">
<h3>Robot Framework Appium Library</h3></div>
                            
                                

                                                                                                
                                
<div class="t-small ">
<p>Robot Framework Appium Library leverages the power of Python and the flexibility of Robot Framework. It is suited for teams with a solid coding background who need a robust and versatile testing solution. This tool offers comprehensive support for various mobile testing scenarios but requires more setup and a steeper learning curve compared to Maestro.</p></div>
                            
                                

                                                                                                
                                
<div class="t-large ">
<h3>WebdriverIO</h3></div>
                            
                                

                                                                                                
                                
<div class="t-small ">
<p>WebdriverIO is a JavaScript-based framework that supports both web and mobile testing. It is highly extensible and integrates well with various CI/CD pipelines, making it suitable for teams already familiar with JavaScript. WebdriverIO strikes a balance between usability and flexibility, offering powerful features for both beginner and advanced testers.</p></div>
                            
                                

                                                                                                
                                
<div class="t-large ">
<h3>Feature comparison table</h3></div>
                            
                                

                                                                                                
                                
<div class="table t-small">



  <table>
    <thead>
      <tr>
                  <th>Feature</th>
                  <th>Maestro</th>
                  <th>Robot Framework Appium Library</th>
                  <th>WebdriverIO</th>
              </tr>
    </thead>
    <tbody>
              <tr>
                      <td>Syntax</td>
                      <td>YAML</td>
                      <td>Python (Robot Framework)</td>
                      <td>JavaScript</td>
                  </tr>
              <tr>
                      <td>Platform support</td>
                      <td>Android, iOS, React Native, Flutter, Web Views, .NET MAUI</td>
                      <td>Android, iOS</td>
                      <td>Android, iOS, Web</td>
                  </tr>
              <tr>
                      <td>Setup complexity</td>
                      <td>Low</td>
                      <td>Medium</td>
                      <td>Medium</td>
                  </tr>
              <tr>
                      <td>Execution speed</td>
                      <td>Fast</td>
                      <td>Slow</td>
                      <td>Medium</td>
                  </tr>
              <tr>
                      <td>Accessibility</td>
                      <td>High (no coding required)</td>
                      <td>Medium</td>
                      <td>High (JavaScript friendly)</td>
                  </tr>
              <tr>
                      <td>Test management</td>
                      <td>Simple (YAML flows)</td>
                      <td>Complex</td>
                      <td>Medium</td>
                  </tr>
          </tbody>
  </table>
</div>                            
                                

                                                                                                
                                
<div class="t-large ">
<h3>When to use each tool</h3></div>
                            
                                

                                                                                                
                                
<div class="t-small ">
<ul><li><p><strong>Maestro</strong>: Recommended for teams seeking quick setup and ease of use, especially when team members have limited coding knowledge. Ideal for small to medium projects where speed and simplicity are crucial.</p></li><li><p><strong>Robot Framework Appium Library</strong>: Best for teams with strong coding skills who require a highly versatile and robust testing solution. Suitable for complex projects that demand extensive customisation.</p></li><li><p><strong>WebdriverIO</strong>: Great for teams with JavaScript expertise looking for a flexible tool that can handle both web and mobile testing. Perfect for projects that benefit from extensive integrations and a balance of ease of use and power.</p></li></ul><p>By understanding the strengths and appropriate use cases for each tool, teams can make informed decisions to enhance their mobile testing strategy.</p></div>
                            
                                

                                                                                                                                                                
                                <h2 class="t-3xl ">Final considerations</h2>
                            
                                

                                                                                                                                    
                                
<div class="t-small ">
<p>Based on our experience, we consider Maestro an exceptionally valuable tool for those new to mobile UI automation. Its simplicity in setup and use makes it particularly suitable both for small projects and for teams looking to gain a basic understanding of UI automation without facing a steep learning curve. For more complex projects, however, it might be necessary to opt for a more powerful and versatile tool.</p><p>Maestro is a great asset for organisations aiming to improve the quality of their mobile applications through efficient and manageable testing processes. By reducing the complexity and time required to configure and execute tests, Maestro enables teams to focus more on innovation, reducing time to market and enhancing the quality of the final products.</p></div>
                            
                                

                                                                                                                                                                                                ]]>
        </description>
    </item>
      <item>
        <title><![CDATA[An introduction to cross-cloud access in managed Kubernetes clusters]]></title>
        <link>https://nearform.estd.dev/digital-community/an-introduction-to-cross-cloud-access-in-managed-kubernetes-clusters</link>
        <guid>https://nearform.estd.dev/@/page/a059b514-1e6b-4f6d-8bb7-681069bef66a</guid>

      
                    <category>Digital Community</category>
              
    

        <pubDate>Fri, 26 Jul 2024 00:00:00 +0000</pubDate>
            <author>
                        </author>
                            <media:content url="https://nearform.estd.dev/media/pages/digital-community/an-introduction-to-cross-cloud-access-in-managed-kubernetes-clusters/180c00399d-1725542994/option-c-500x300-crop-q80.png" type="image/webp" medium="image" duration="10"> </media:content>
            
            <description>
                
            <![CDATA[
            <h2>

The insights we gained from implementing cross-cloud access
</h2>
                                                                                                                                        
                                
<div class="t-large ">
<p>In the ever-evolving landscape of cloud computing, it's increasingly common to face scenarios that necessitate cross-cloud access. Whether it’s accessing AWS services from an Azure Kubernetes Service (AKS) cluster or leveraging Azure resources from an AWS Elastic Kubernetes Service (EKS) cluster, these scenarios present unique challenges that demand innovative solutions.</p></div>
                            
                                

                                                                                                                                                                
                                <h2 class="t-3xl ">A project where we migrated to AKS: Background and challenges</h2>
                            
                                

                                                                                                                                    
                                
<div class="t-small ">
<p>In one of my recent projects, we encountered the need to migrate an application from a self-managed Kubernetes cluster hosted on AWS to Azure Kubernetes Services. This transition was not just a simple lift-and-shift, as it also involved enabling secure access to cloud resources across different cloud providers.</p><p>Official documentation often falls short in covering these complex scenarios comprehensively, leading to potential gaps in implementation. Through this blog post, I aim to share insights gained from implementing cross-cloud access, focusing on the integration between Azure and AWS. This discussion will cover two key scenarios:</p><ol><li><p>Enabling access to AWS services from an Azure AKS cluster.</p></li><li><p>Facilitating access to Azure services from an AWS EKS cluster.</p></li></ol><p>Our goal is to foster a harmonious interaction between these two cloud giants, overcoming the common obstacles encountered in such integrations and securely granting access without the need for sharing credentials.</p></div>
                            
                                

                                                                                                                                                                
                                <h2 class="t-3xl ">Proposed solution: AWS IAM Role for Service Account (IRSA) and Azure workload identity federation</h2>
                            
                                

                                                                                                                                    
                                
<div class="t-small ">
<p>Each cloud provider offers a bespoke solution to ensure secure access within its managed Kubernetes clusters. AWS provides <a href="https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html">IRSA</a> (IAM Roles for Service Accounts), allowing workloads to safely assume roles that grant the permissions needed to access cloud resources. Meanwhile, Azure has rolled out <a href="https://azure.github.io/azure-workload-identity/docs/">Azure Workload Identity</a> to streamline identity and access management. Both platforms require a specific add-on to inject the tokens associated with the service account.</p><p>Central to both mechanisms is OpenID Connect (OIDC), a robust, time-tested protocol for identity verification across disparate platforms. OIDC is pivotal for securely interconnecting Azure and AWS services, offering a secure, scalable and adaptable authentication framework that underpins our approach and guarantees smooth, dependable identity management across cloud environments.</p></div>
                            
                                

                                                                                                                                                                
                                <h2 class="t-3xl ">OpenID Connect primer</h2>
                            
                                

                                                                                                                                    
                                
<div class="t-small ">
<p>OpenID Connect (OIDC) is a crucial protocol for verifying identities across various cloud platforms. It extends the OAuth 2.0 framework by securely verifying a user's identity through an authorisation server and enabling access to resource servers. Although these protocols are widely used, the complexities of OAuth 2.0 and OIDC can be overwhelming at first, so it’s worth exploring in more depth how they facilitate secure cross-cloud access.</p></div>
                            
                                

                                                                                                
                                
<div class="t-large ">
<h3>Understanding OAuth 2.0 and OIDC</h3></div>
                            
                                

                                                                                                
                                
<div class="t-small ">
<p>In essence, accessing specific data or services requires authorisation to ensure the request is permitted and identity verification to confirm that the requestor is indeed who they claim to be. While OAuth 2.0 handles authorisation, it does not inherently confirm the requester's identity. OIDC addresses this gap by adding a layer of identity verification atop OAuth 2.0.</p><p><strong>Core components involved:</strong></p><ul><li><p><strong>User Data</strong>: The data or services needed, managed by a resource server.</p></li><li><p><strong>Resource Owner</strong>: The owner of the data or service.</p></li><li><p><strong>Authorisation Server</strong>: Authorises access requests.</p></li><li><p><strong>Relying Party/Client Application:</strong> Initiates access requests. Upon approval from the Resource Owner, the Authorisation Server issues an "Access Token."</p></li></ul><p>This framework ensures that data access is both authenticated and authorised.</p></div>
                            
                                

                                                                                                
                                
<div class="t-large ">
<h3>OAuth 2.0 Flows</h3></div>
                            
                                

                                                                                                
                                
<div class="t-small ">
<p>The OAuth 2.0 protocol includes various flows to accommodate different scenarios:</p><p><strong>1. Authorisation code flow</strong>: Used in systems like "Sign in with Google," where a user logs in to share specific data, receiving an access token in exchange for an authorisation code.</p><p><strong>2. Implicit flow:</strong> Previously popular in applications that couldn't securely store secrets, such as single-page applications. Now, due to security concerns, more secure methods like the Authorisation Code Flow with PKCE are recommended.</p><p><strong>3. Password credential flow</strong>: Employed when there is a strong trust relationship between the resource owner and the client, such as personal mobile apps.</p><p><strong>4. Client credential flow</strong>: Ideal for server-to-server interactions, where a client application uses its credentials to obtain an access token directly, granting access to the resource server.</p><p>In all these flows, OIDC enhances the process by adding an "ID Token" to verify the user's identity, providing a robust identity layer over the existing OAuth 2.0 authorisation structure. For further details, you can refer to the official <a href="https://openid.net/developers/specs/">OpenID Connect documentation</a>.</p></div>
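<div class="t-small ">
<p>As a sketch of the client credential flow, a token request typically looks like the following (the endpoint, client ID, secret and scope are all placeholders):</p></div>

<figure class="code_container">
    <figcaption>
        <span>shell</span>
    </figcaption>
    <pre><code class="language-shell"># Hypothetical request: the client authenticates with its own
# credentials and receives an access token in the JSON response.
curl -X POST https://auth.example.com/oauth2/token \
  -d grant_type=client_credentials \
  -d client_id=my-client-id \
  -d client_secret=my-client-secret \
  -d scope=storage.read</code></pre>
</figure>

<div class="t-small ">
<p>The response contains an <code>access_token</code> that the client then presents to the resource server on each request.</p></div>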
                            
                                

                                                                                                                                                                
                                <h2 class="t-3xl ">OIDC in cloud Kubernetes environments</h2>
                            
                                

                                                                                                                                    
                                
<div class="t-small ">
<p>In this section, I share detailed authentication steps for AWS IRSA with Amazon EKS and Azure Workload Identity in AKS.</p></div>
                            
                                

                                                                                                
                                
<div class="t-large ">
<h3>AWS IRSA with Amazon EKS</h3></div>
                            
                                

                                                                                                
                                
<div class="t-small ">
<p>In AWS IRSA (IAM Roles for Service Accounts) within Amazon EKS, the OAuth flow used is akin to the <strong>Client Credentials Grant</strong>. This design is specifically tailored for machine-to-machine communication where an application acts on its own behalf rather than acting for a user. AWS IAM plays a critical role here, validating service account tokens issued by EKS and providing scoped AWS credentials for accessing AWS resources.</p></div>
                            
                                

                                                                                                
                                
<div class="t-small ">
<h4><strong>Detailed authentication steps in AWS IRSA using an OIDC provider:</strong></h4></div>
                            
                                

                                                                                                
                                
<div class="t-small ">
<p><strong>1. OIDC provider configuration</strong>:</p><ul><li><p>Within AWS EKS, an OIDC provider is configured to connect the Kubernetes cluster's identity system with AWS IAM. This OIDC provider is not a separate service but a configuration within AWS IAM that trusts the Kubernetes cluster's service account tokens.</p></li><li><p>The OIDC provider in AWS is set up to trust tokens issued by the Kubernetes cluster, meaning it recognises these tokens as authentic credentials.</p><p></p></li></ul><p><strong>2. Service account token</strong>:</p><ul><li><p>Each Kubernetes pod can be assigned a specific service account. This account has a JWT (JSON Web Token) associated with it, which is automatically managed and rotated by EKS.</p></li><li><p>This JWT includes claims that identify the particular service account and permissions.</p><p></p></li></ul><p><strong>3. Authentication process</strong>:</p><ul><li><p>When a pod in EKS needs to access AWS resources, it retrieves its JWT from the file system (mounted by EKS into the pod).</p></li><li><p>The pod or the application inside it then presents this JWT in a request to AWS IAM as part of the API call to assume the linked IAM role.</p></li></ul><p><strong>4. Token validation</strong>:</p><ul><li><p>AWS IAM, acting as the OIDC provider, validates the JWT against the issuer URL, the audience (<code>aud</code>), and other claims included in the JWT. These validations ensure that the token is indeed issued by the trusted EKS cluster and that it is intended for the specific IAM role it is requesting.</p></li><li><p>This validation process is crucial as it confirms the authenticity and appropriateness of the request, effectively authenticating the service account's identity based on the OIDC standards.</p></li></ul><p><strong>5. Credential issuance</strong>:</p><ul><li><p>Upon successful validation, AWS IAM issues AWS credentials (access key, secret key, and session token) to the pod. 
These credentials are temporary and scoped to the permissions defined in the IAM role associated with the service account.</p></li></ul><p><strong>6. Using AWS resources</strong>:</p><ul><li><p>With these credentials, the pod can make authenticated and authorised API calls to AWS services, operating within the permissions boundaries set by the IAM role.</p></li></ul><p>For more information, refer to the official <a href="https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html">documentation</a>.</p></div>
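<div class="t-small ">
<p>The link between a Kubernetes service account and an IAM role is a single annotation on the service account. A minimal sketch, assuming a hypothetical role named <code>s3-reader-role</code>:</p></div>

<figure class="code_container">
    <figcaption>
        <span>yaml</span>
    </figcaption>
    <pre><code class="language-yaml">apiVersion: v1
kind: ServiceAccount
metadata:
  name: s3-reader            # hypothetical service account name
  namespace: default
  annotations:
    # IAM role that pods using this service account will assume
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/s3-reader-role</code></pre>
</figure>

<div class="t-small ">
<p>Pods referencing this service account receive the projected JWT automatically, and the AWS SDKs pick it up via the injected <code>AWS_WEB_IDENTITY_TOKEN_FILE</code> and <code>AWS_ROLE_ARN</code> environment variables.</p></div>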
                            
                                

                                                                                                
                                
<div class="t-large ">
<h3>Azure Workload Identity in AKS</h3></div>
                            
                                

                                                                                                
                                
<div class="t-small ">
<p>Similarly, in Azure Kubernetes Service (AKS), Azure Workload Identity uses a flow comparable to the <strong>OAuth 2.0 Client Credentials Grant</strong>. This setup allows Kubernetes pods to securely access Azure resources by leveraging Azure Active Directory (AAD) identities. AKS manages the association between Kubernetes service accounts and AAD identities, providing a secure and scalable authentication method without user interaction.</p></div>
                            
                                

                                                                                                
                                
<div class="t-small ">
<h4><strong>Detailed authentication steps in Azure AD Workload Identity for Kubernetes:</strong></h4></div>
                            
                                

                                                                                                
                                
<div class="t-small ">
<p><strong>1. OIDC provider configuration: </strong></p><p>Azure AD (Active Directory) Workload Identity uses Azure Active Directory (AAD) as the OIDC provider. This configuration involves setting up Kubernetes to issue tokens that Azure AD can trust. The integration between Kubernetes and Azure AD allows Kubernetes service accounts to use these tokens to authenticate against Azure resources securely.</p><p><strong>2. Service account token:</strong></p><p>In Azure Kubernetes Service (AKS), each pod can use a Kubernetes service account that is automatically integrated with Azure AD using the Azure Workload Identity setup. The tokens issued to these service accounts are projected into the pod's file system and are used for authentication with Azure AD.</p><p><strong>3. Authentication process:</strong></p><ul><li><p>A pod needing to access Azure resources will use the token provided to its Kubernetes service account.</p></li><li><p>This token is presented to Azure AD as part of the request to access Azure resources.</p></li></ul><p><strong>4. Token validation:</strong></p><ul><li><p>Azure AD validates the Kubernetes service account token using established trust configurations. It checks the issuer, the audience, and other claims in the token to ensure it is valid and issued by a trusted Kubernetes cluster.</p></li><li><p>This validation is crucial as it ensures the token's authenticity and appropriateness for the requested Azure resource access.</p></li></ul><p><strong>5. Credential issuance:</strong></p><ul><li><p>Once the token is validated, Azure AD issues an Azure access token to the pod.</p></li><li><p>This token is specifically scoped to the permissions that are assigned to the Azure AD identity linked with the Kubernetes service account.</p></li></ul><p><strong>6. 
Using Azure resources:</strong></p><ul><li><p>The pod uses the Azure access token to make authenticated and authorised API calls to Azure services.</p></li><li><p>These services validate the Azure access token and provide access based on the permissions configured for the Azure AD identity.</p></li></ul><p>For more information, refer to the official <a href="https://azure.github.io/azure-workload-identity/docs/">documentation</a>.</p></div>
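<div class="t-small ">
<p>The wiring on the Azure side is similar: the service account is annotated with the client ID of the Azure AD identity it maps to. A minimal sketch with placeholder values:</p></div>

<figure class="code_container">
    <figcaption>
        <span>yaml</span>
    </figcaption>
    <pre><code class="language-yaml">apiVersion: v1
kind: ServiceAccount
metadata:
  name: blob-reader          # hypothetical service account name
  namespace: default
  annotations:
    # Client ID of the user-assigned managed identity / AAD application
    azure.workload.identity/client-id: "00000000-0000-0000-0000-000000000000"</code></pre>
</figure>

<div class="t-small ">
<p>Pods that should receive the projected token additionally carry the <code>azure.workload.identity/use: "true"</code> label, so that the webhook injects the token and related environment variables.</p></div>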
                            
                                

                                                                                                                                                                
                                <h2 class="t-3xl ">Running IRSA in AKS</h2>
                            
                                

                                                                                                                                    
                                
<div class="t-small ">
<p>Following in the footsteps of IRSA in EKS, we simply need to substitute the EKS cluster with an AKS cluster. AKS hosts a public endpoint that exposes the OpenID configuration via the OIDC Issuer URL. Next, we create an IAM Identity Provider (IdP) that points to the AKS OIDC Issuer URL. This IdP is then referenced in the trust policy of an IAM Role, along with a policy granting the permissions needed to access AWS services. Finally, we annotate the service account in the AKS cluster with the ARN (Amazon Resource Name) of the IAM Role we created earlier.</p><p>In detail, we need to follow these steps:</p><ol><li><p>In Azure: Create a resource group to host the AKS cluster.</p></li><li><p>In Azure: Create an AKS cluster and enable the OIDC Issuer.</p></li><li><p>In Azure: Create a service account (SA).</p></li><li><p>In Azure: Get the OIDC Issuer URL.</p></li><li><p>In AWS: Create a resource (an S3 bucket) to verify access.</p></li><li><p>In AWS: Create a role granting access to the S3 bucket created above.</p></li><li><p>In AWS: Create an identity provider that points to the AKS OIDC Issuer URL.</p></li><li><p>In Azure: Annotate the SA in AKS with the ARN of the IAM Role created above.</p></li><li><p>In Azure: Install the <code>amazon-eks-pod-identity</code> helm chart.</p></li><li><p>In Azure: Create a workload that uses the SA and try to access the AWS service (in this case, the S3 bucket).</p></li></ol></div>
                            
                                

                                                                                                
                                
<div class="t-small ">
<p>In the next section, we’ll get our hands dirty. I have used the <code>azure cli</code> in the steps that follow, so make sure that you have authenticated to Azure with your credentials.</p><p>At the end of the next section, our solution will look like this:</p></div>
                            
                                

                                                                                                                                                                                                <figure>
                                        <img src="https://nearform.estd.dev/media/pages/digital-community/an-introduction-to-cross-cloud-access-in-managed-kubernetes-clusters/0c93f533b2-1725542994/az-aws-500x300-crop-q80.png" style="width: 100%; height: auto; margin-top: 20px; margin-bottom: 10px;" alt="" />
                                    </figure>
                                                                
                                

                                                                                                
                                
<div class="t-large ">
<h3>Create an AKS cluster</h3></div>
                            
                                

                                                                                                
                                
<div class="t-small ">
<p>Run the commands below. Note that we set several environment variables here; we will keep building on these and reuse them later down the line.</p></div>
                            
                                

                                                                                                
                                <figure class="code_container">
        

    <figcaption>
        <span>bash</span> <!-- Display the languageTitle or languageCode -->
        <button class="copy-button" data-clipboard-target="#code_block_69d0b91b1f9f1">Copy to clipboard</button>
    </figcaption>
    <pre><code id="code_block_69d0b91b1f9f1" class="language-bash"># Set required environment variables.
export LOCATION=&quot;eastus&quot;
export RESOURCE_GROUP_NAME=&quot;rg-aks-az-identity&quot;
export CLUSTER_NAME=&quot;aks-az-identity&quot;
export SERVICE_ACCOUNT_NAMESPACE=&quot;default&quot;
export SERVICE_ACCOUNT_NAME=&quot;workload-identity-sa&quot;
export SUBSCRIPTIONID=$(az account show --query id -o tsv)
export AZURE_TENANT_ID=$(az account show -s ${SUBSCRIPTIONID} --query tenantId -otsv)

# Create resource group.
echo &quot;Creating a resource group: ${RESOURCE_GROUP_NAME}&quot;
az group create --name ${RESOURCE_GROUP_NAME} --location ${LOCATION}

# Create AKS cluster.
az aks create \
    --resource-group ${RESOURCE_GROUP_NAME}  \
    --name ${CLUSTER_NAME} \
    --network-plugin azure \
    --enable-managed-identity \
    --generate-ssh-keys \
    --node-count 1 \
    --enable-oidc-issuer \
    --outbound-type  loadBalancer 

# Output the OIDC issuer URL.
export SERVICE_ACCOUNT_ISSUER=$(az aks show --resource-group ${RESOURCE_GROUP_NAME} --name ${CLUSTER_NAME} --query &quot;oidcIssuerProfile.issuerUrl&quot; -otsv)

# Get the kubeconfig for the Kubernetes clusters.
az aks get-credentials --name ${CLUSTER_NAME} --resource-group ${RESOURCE_GROUP_NAME}

# Create a new Service account.
kubectl create sa ${SERVICE_ACCOUNT_NAME} -n ${SERVICE_ACCOUNT_NAMESPACE}</code></pre>
</figure>
                            
                                

                                                                                                
                                
<div class="t-large ">
<h3>Create an AWS resource for testing the access from AKS</h3></div>
                            
                                

                                                                                                
                                
<div class="t-small ">
<p>First, create an Amazon S3 bucket, replacing the bucket name and region below with your own values (S3 bucket names are globally unique, so you may need to choose a different name). Then copy a file into the bucket for testing later; here we use a dummy text file named <code>test-file.txt</code>.</p></div>
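The upload below assumes `test-file.txt` already exists in your working directory; creating it is a one-liner, and its contents are arbitrary:

```shell
# Create the dummy file that will be uploaded to the bucket for the access test.
echo "cross-cloud access test" > test-file.txt
```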
                            
                                

                                                                                                
                                <figure class="code_container">
        

    <figcaption>
        <span>bash</span> <!-- Display the languageTitle or languageCode -->
        <button class="copy-button" data-clipboard-target="#code_block_69d0b91b1fa44">Copy to clipboard</button>
    </figcaption>
    <pre><code id="code_block_69d0b91b1fa44" class="language-bash">export BUCKET_NAME=irsaaccess
export REGION=us-east-1
aws s3api create-bucket --bucket ${BUCKET_NAME} --region ${REGION}

aws s3 cp test-file.txt s3://${BUCKET_NAME}/test-file.txt</code></pre>
</figure>
                            
                                

                                                                                                
                                
<div class="t-large ">
<h3>Create an AWS IAM Identity provider</h3></div>
                            
                                

                                                                                                
                                
<div class="t-small ">
<p>Create an identity provider in the AWS console using the AKS OIDC Issuer URL held in <code>SERVICE_ACCOUNT_ISSUER</code> above, with <code>sts.amazonaws.com</code> as the audience.</p><p>You need three pieces of information:</p><ol><li><p>The OIDC Issuer URL from AKS.</p></li><li><p>The audience, which is <code>sts.amazonaws.com</code>.</p></li><li><p>The CA root certificate thumbprint.</p></li></ol><p>We already have the first two, so all that remains is to get the CA root certificate thumbprint.</p></div>
                            
                                

                                                                                                
                                <figure class="code_container">
        

    <figcaption>
        <span>bash</span> <!-- Display the languageTitle or languageCode -->
        <button class="copy-button" data-clipboard-target="#code_block_69d0b91b1fa89">Copy to clipboard</button>
    </figcaption>
    <pre><code id="code_block_69d0b91b1fa89" class="language-bash"># Extract the OIDC host from the issuer URL
export OIDC_ISSUER=$(echo $SERVICE_ACCOUNT_ISSUER | sed &#039;s|https://||&#039;)
export OIDC_HOST=$(echo $SERVICE_ACCOUNT_ISSUER | sed &#039;s|https://||&#039; | cut -d&#039;/&#039; -f1)

# Fetch the certificate chain from the OIDC host
echo | openssl s_client -connect $OIDC_HOST:443 -servername $OIDC_HOST -showcerts 2&gt;/dev/null | awk &#039;/-----BEGIN CERTIFICATE-----/{cert=$0 &quot;\n&quot;; next} /-----END CERTIFICATE-----/{cert=cert $0 &quot;\n&quot;; last_cert=cert; next} {cert=cert $0 &quot;\n&quot;} END{print last_cert &gt; &quot;last_cert.pem&quot;}&#039; 

# Calculate the SHA-1 fingerprint of the root CA certificate and format it as AWS expects.
CERT_THUMBPRINT=$(openssl x509 -in last_cert.pem -fingerprint -noout -sha1 | sed &#039;s/sha1 Fingerprint=//&#039; | tr -d &#039;:&#039;)

rm last_cert.pem

export IAM_IDP_ARN=$(aws iam create-open-id-connect-provider --url $SERVICE_ACCOUNT_ISSUER --client-id-list &quot;sts.amazonaws.com&quot; --thumbprint-list $CERT_THUMBPRINT | jq -r &#039;.OpenIDConnectProviderArn&#039;)</code></pre>
</figure>
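If the `create-open-id-connect-provider` call fails, a common culprit is a malformed thumbprint: AWS expects a 40-character hexadecimal SHA-1 fingerprint with no colons. A quick, optional check (an assumption of this walkthrough, not an official AWS validation):

```shell
# Validate the thumbprint format before (or after) registering the identity provider.
if echo "${CERT_THUMBPRINT}" | grep -Eq '^[0-9A-Fa-f]{40}$'; then
  echo "thumbprint format OK"
else
  echo "unexpected thumbprint: ${CERT_THUMBPRINT}" >&2
fi
```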
                            
                                

                                                                                                
                                
<div class="t-large ">
<h3>Create an IAM Policy to grant access to the S3 bucket</h3></div>
                            
                                

                                                                                                
                                
<div class="t-small ">
<p>Save the policy JSON below to a local file named <code>s3-policy.json</code>; we will consume it shortly. Note that the bucket name must match the S3 bucket we created earlier, so adjust it if you changed <code>BUCKET_NAME</code>.</p></div>
                            
                                

                                                                                                
                                <figure class="code_container">
        

    <figcaption>
        <span>json</span> <!-- Display the languageTitle or languageCode -->
        <button class="copy-button" data-clipboard-target="#code_block_69d0b91b1fad3">Copy to clipboard</button>
    </figcaption>
    <pre><code id="code_block_69d0b91b1fad3" class="language-json">{
    &quot;Version&quot;: &quot;2012-10-17&quot;,
    &quot;Statement&quot;: [
        {
            &quot;Effect&quot;: &quot;Allow&quot;,
            &quot;Action&quot;: [
                &quot;s3:GetObject&quot;,
                &quot;s3:ListBucket&quot;
            ],
            &quot;Resource&quot;: [
                &quot;arn:aws:s3:::irsaaccess&quot;,
                &quot;arn:aws:s3:::irsaaccess/*&quot;
            ]
        }
    ]
}</code></pre>
</figure>
                            
                                

                                                                                                
                                
<div class="t-small ">
<p>Run the command below to create the custom policy. Make sure that the <code>aws</code> CLI is configured with your credentials.</p></div>
                            
                                

                                                                                                
                                <figure class="code_container">
        

    <figcaption>
        <span>bash</span> <!-- Display the languageTitle or languageCode -->
        <button class="copy-button" data-clipboard-target="#code_block_69d0b91b1fb02">Copy to clipboard</button>
    </figcaption>
    <pre><code id="code_block_69d0b91b1fb02" class="language-bash">export POLICY_ARN=$(aws iam create-policy --policy-name S3ListReadPolicy --policy-document file://s3-policy.json | jq -r &#039;.Policy.Arn&#039;)</code></pre>
</figure>
                            
                                

                                                                                                
                                
<div class="t-large ">
<h3>Create a trust relationship policy document</h3></div>
                            
                                

                                                                                                
                                
<div class="t-small ">
<p>Use the templated JSON below to create a trust policy document and save it as <code>trust-policy-template.json</code>.</p></div>
                            
                                

                                                                                                
                                <figure class="code_container">
        

    <figcaption>
        <span>json</span> <!-- Display the languageTitle or languageCode -->
        <button class="copy-button" data-clipboard-target="#code_block_69d0b91b1fb46">Copy to clipboard</button>
    </figcaption>
    <pre><code id="code_block_69d0b91b1fb46" class="language-json">{
    &quot;Version&quot;: &quot;2012-10-17&quot;,
    &quot;Statement&quot;: [
        {
            &quot;Effect&quot;: &quot;Allow&quot;,
            &quot;Principal&quot;: {
                &quot;Federated&quot;: &quot;$IAM_IDP_ARN&quot;
            },
            &quot;Action&quot;: &quot;sts:AssumeRoleWithWebIdentity&quot;,
            &quot;Condition&quot;: {
                &quot;StringEquals&quot;: {
                    &quot;$OIDC_ISSUER:aud&quot;: &quot;sts.amazonaws.com&quot;
                },
                &quot;ForAnyValue:StringEquals&quot;: {
                    &quot;$OIDC_ISSUER:sub&quot;: [
                        &quot;system:serviceaccount:$SERVICE_ACCOUNT_NAMESPACE:$SERVICE_ACCOUNT_NAME&quot;
                    ]
                }
            }
        }
    ]
}</code></pre>
</figure>
                            
                                

                                                                                                
                                
<div class="t-small ">
<p>We now have a templated trust policy file named <code>trust-policy-template.json</code>. Using the <code>envsubst</code> CLI, substitute the template placeholders with the values of our environment variables to produce a file that is ready to consume.</p></div>
                            
                                

                                                                                                
                                <figure class="code_container">
        

    <figcaption>
        <span>bash</span> <!-- Display the languageTitle or languageCode -->
        <button class="copy-button" data-clipboard-target="#code_block_69d0b91b1fb79">Copy to clipboard</button>
    </figcaption>
    <pre><code id="code_block_69d0b91b1fb79" class="language-bash">envsubst &lt; trust-policy-template.json &gt; trust-policy.json</code></pre>
</figure>
                            
                                

                                                                                                
                                
<div class="t-small ">
<p>This generates a new file, <code>trust-policy.json</code>, with the required configuration.</p></div>
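Before consuming the rendered file, it is worth confirming that `envsubst` resolved every placeholder. A minimal sketch of such a check (assumes `jq` is installed; any leftover `$` indicates an unset variable in the template):

```shell
# Verify the rendered trust policy is valid JSON and contains no leftover
# $VARIABLE placeholders from the template.
jq empty trust-policy.json && echo "valid JSON"
if grep -q '\$' trust-policy.json; then
  echo "warning: unresolved placeholders remain" >&2
fi
```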
                            
                                

                                                                                                
                                
<div class="t-large ">
<h3>Create an IAM Role by attaching a trust policy</h3></div>
                            
                                

                                                                                                
                                
<div class="t-small ">
<p>Create the role using the AWS CLI, specifying the role name and the trust relationship policy document, i.e. the <code>trust-policy.json</code> file we generated in the previous step:</p></div>
                            
                                

                                                                                                
                                <figure class="code_container">
        

    <figcaption>
        <span>bash</span> <!-- Display the languageTitle or languageCode -->
        <button class="copy-button" data-clipboard-target="#code_block_69d0b91b1fbd0">Copy to clipboard</button>
    </figcaption>
    <pre><code id="code_block_69d0b91b1fbd0" class="language-bash">export WEB_IDENTITY_ROLE=$(aws iam create-role --role-name MyWebIdentityRole --assume-role-policy-document file://trust-policy.json | jq -r &#039;.Role.Arn&#039;)</code></pre>
</figure>
                            
                                

                                                                                                
                                
<div class="t-small ">
<p>Once the role is created, you can attach permission policies to define what resources and actions the role can access. If you already have a policy ARN to attach, you can use:</p></div>
                            
                                

                                                                                                
                                <figure class="code_container">
        

    <figcaption>
        <span>bash</span> <!-- Display the languageTitle or languageCode -->
        <button class="copy-button" data-clipboard-target="#code_block_69d0b91b1fbff">Copy to clipboard</button>
    </figcaption>
    <pre><code id="code_block_69d0b91b1fbff" class="language-bash">aws iam attach-role-policy --role-name MyWebIdentityRole --policy-arn $POLICY_ARN</code></pre>
</figure>
                            
                                

                                                                                                
                                
<div class="t-large ">
<h3>Annotate the Service account for IRSA</h3></div>
                            
                                

                                                                                                
                                <figure class="code_container">
        

    <figcaption>
        <span>bash</span> <!-- Display the languageTitle or languageCode -->
        <button class="copy-button" data-clipboard-target="#code_block_69d0b91b1fc32">Copy to clipboard</button>
    </figcaption>
    <pre><code id="code_block_69d0b91b1fc32" class="language-bash">kubectl annotate sa ${SERVICE_ACCOUNT_NAME} -n ${SERVICE_ACCOUNT_NAMESPACE} eks.amazonaws.com/role-arn=&quot;${WEB_IDENTITY_ROLE}&quot; --overwrite
kubectl annotate sa ${SERVICE_ACCOUNT_NAME} -n ${SERVICE_ACCOUNT_NAMESPACE} eks.amazonaws.com/audience=&quot;sts.amazonaws.com&quot; --overwrite
kubectl annotate sa ${SERVICE_ACCOUNT_NAME} -n ${SERVICE_ACCOUNT_NAMESPACE} eks.amazonaws.com/sts-regional-endpoints=&quot;true&quot; --overwrite
kubectl annotate sa ${SERVICE_ACCOUNT_NAME} -n ${SERVICE_ACCOUNT_NAMESPACE} eks.amazonaws.com/token-expiration=&quot;86400&quot; --overwrite</code></pre>
</figure>
                            
                                

                                                                                                
                                
<div class="t-large ">
<h3>Deploy the pod identity webhook</h3></div>
                            
                                

                                                                                                
                                
<div class="t-small ">
<p>This webhook mutates pods that require AWS IAM access, and it can be installed using the Helm chart below:</p><p><code>https://artifacthub.io/packages/helm/jkroepke/amazon-eks-pod-identity-webhook</code></p><p>Note that <code>cert-manager</code> is a prerequisite for this add-on; you can install it by following the official documentation at <a href="https://cert-manager.io/docs/installation/">https://cert-manager.io/docs/installation/</a>.</p></div>
                            
                                

                                                                                                
                                <figure class="code_container">
        

    <figcaption>
        <span>bash</span> <!-- Display the languageTitle or languageCode -->
        <button class="copy-button" data-clipboard-target="#code_block_69d0b91b1fc77">Copy to clipboard</button>
    </figcaption>
    <pre><code id="code_block_69d0b91b1fc77" class="language-bash">kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.14.4/cert-manager.yaml

helm repo add jkroepke https://jkroepke.github.io/helm-charts/
helm install amazon-eks-pod-identity-webhook jkroepke/amazon-eks-pod-identity-webhook</code></pre>
</figure>
                            
                                

                                                                                                
                                
<div class="t-large ">
<h3>Run a workload in the AKS that needs S3 bucket access</h3></div>
                            
                                

                                                                                                
                                
<div class="t-small ">
<p>Note that we are deploying a pod using the service account created earlier in this setup. This pod utilises the Amazon <code>aws-cli</code> image to execute some CLI commands and test the access.</p></div>
                            
                                

                                                                                                
                                <figure class="code_container">
        

    <figcaption>
        <span>bash</span> <!-- Display the languageTitle or languageCode -->
        <button class="copy-button" data-clipboard-target="#code_block_69d0b91b1fcc1">Copy to clipboard</button>
    </figcaption>
    <pre><code id="code_block_69d0b91b1fcc1" class="language-bash">cat &lt;&lt;EOF | kubectl apply -f - 
apiVersion: v1
kind: Pod
metadata:
  name: awscli
  namespace: ${SERVICE_ACCOUNT_NAMESPACE}
  labels:
    app: awscli
spec:
  serviceAccountName: ${SERVICE_ACCOUNT_NAME}
  containers:
  - image: amazon/aws-cli
    command:
      - &quot;sleep&quot;
      - &quot;604800&quot;
    imagePullPolicy: IfNotPresent
    name: awscli
  restartPolicy: Always
EOF</code></pre>
</figure>
                            
                                

                                                                                                
                                
<div class="t-small ">
<p>After deployment, proceed to test the access with the following commands:</p></div>
                            
                                

                                                                                                
                                <figure class="code_container">
        

    <figcaption>
        <span>bash</span> <!-- Display the languageTitle or languageCode -->
        <button class="copy-button" data-clipboard-target="#code_block_69d0b91b1fcf3">Copy to clipboard</button>
    </figcaption>
    <pre><code id="code_block_69d0b91b1fcf3" class="language-bash">kubectl exec -it awscli -n ${SERVICE_ACCOUNT_NAMESPACE} -- aws sts get-caller-identity

kubectl exec -it awscli -n ${SERVICE_ACCOUNT_NAMESPACE} -- aws s3 ls s3://irsaaccess</code></pre>
</figure>
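If `get-caller-identity` does not return the assumed role, check whether the webhook actually mutated the pod. On a successful mutation, the container environment contains the role and token-file variables injected by the pod identity webhook:

```shell
# List the AWS_* environment variables injected by the pod identity webhook;
# expect AWS_ROLE_ARN and AWS_WEB_IDENTITY_TOKEN_FILE among them.
kubectl exec awscli -n ${SERVICE_ACCOUNT_NAMESPACE} -- env | grep '^AWS_'
```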
                            
                                

                                                                                                                                                                
                                <h2 class="t-3xl ">Running the Azure Workload Identity in an AWS EKS cluster</h2>
                            
                                

                                                                                                                                    
                                
<div class="t-small ">
<p>The following sections cover the steps required for you to run the Azure Workload Identity in an AWS EKS cluster, from creating the AWS EKS cluster to validating access.</p></div>
                            
                                

                                                                                                
                                
<div class="t-large ">
<h3>Create the AWS EKS cluster</h3></div>
                            
                                

                                                                                                
                                
<div class="t-small ">
<p>I am using the <code>eksctl</code> CLI to provision the cluster, so set up your <code>aws cli</code> with the required credentials. In my case, I am loading the <code>default</code> profile on my machine.</p></div>
                            
                                

                                                                                                
                                <figure class="code_container">
        

    <figcaption>
        <span>bash</span> <!-- Display the languageTitle or languageCode -->
        <button class="copy-button" data-clipboard-target="#code_block_69d0b91b1fd3f">Copy to clipboard</button>
    </figcaption>
    <pre><code id="code_block_69d0b91b1fd3f" class="language-bash">export AWS_DEFAULT_PROFILE=default

# Define variables
export CLUSTER_NAME=&quot;my-eks-cluster&quot;
export REGION=&quot;us-west-2&quot;
export NODE_TYPE=&quot;t3.medium&quot;
export NODE_COUNT=1
export KUBERNETES_VERSION=&quot;1.28&quot;

# Create the EKS cluster with OIDC identity provider
echo &quot;Step 1: Creating EKS Cluster: $CLUSTER_NAME with OIDC identity provider&quot;
eksctl create cluster \
  --name $CLUSTER_NAME \
  --version $KUBERNETES_VERSION \
  --region $REGION \
  --nodegroup-name &quot;standard-workers&quot; \
  --node-type $NODE_TYPE \
  --nodes $NODE_COUNT \
  --nodes-min $NODE_COUNT \
  --nodes-max $NODE_COUNT \
  --managed \
  --with-oidc

# Check if the cluster was created successfully
if [ $? -eq 0 ]; then
    echo &quot;EKS Cluster $CLUSTER_NAME created successfully.&quot;
else
    echo &quot;Failed to create EKS Cluster $CLUSTER_NAME.&quot;
fi
export OIDC_URL=$(aws eks describe-cluster --name $CLUSTER_NAME --query &quot;cluster.identity.oidc.issuer&quot; --output text)

export AZURE_SUBSCRIPTION_ID=$(az account show --query id -o tsv)

export AZURE_TENANT_ID=$(az account show -s ${AZURE_SUBSCRIPTION_ID} --query tenantId -otsv)</code></pre>
</figure>
                            
                                

                                                                                                
                                
<div class="t-large ">
<h3>Install Azure Workload Identity</h3></div>
                            
                                

                                                                                                
                                <figure class="code_container">
        

    <figcaption>
        <span>bash</span> <!-- Display the languageTitle or languageCode -->
        <button class="copy-button" data-clipboard-target="#code_block_69d0b91b1fd63">Copy to clipboard</button>
    </figcaption>
    <pre><code id="code_block_69d0b91b1fd63" class="language-bash">helm repo add azure-workload-identity https://azure.github.io/azure-workload-identity/charts
helm repo update
helm install workload-identity-webhook azure-workload-identity/workload-identity-webhook \
   --namespace azure-workload-identity-system \
   --create-namespace \
   --set azureTenantID=&quot;${AZURE_TENANT_ID}&quot;</code></pre>
</figure>
                            
                                

                                                                                                
                                
<div class="t-large ">
<h3>Create an Azure KeyVault for testing</h3></div>
                            
                                

                                                                                                
                                <figure class="code_container">
        

    <figcaption>
        <span>bash</span> <!-- Display the languageTitle or languageCode -->
        <button class="copy-button" data-clipboard-target="#code_block_69d0b91b1fd82">Copy to clipboard</button>
    </figcaption>
    <pre><code id="code_block_69d0b91b1fd82" class="language-bash">export RESOURCE_GROUP_NAME=&quot;rg-aws-access-az&quot;
export LOCATION=&quot;eastus&quot;
export KEYVAULT_NAME=&quot;kv-identity-azwi1&quot;
export KEYVAULT_SECRET_NAME=&quot;secret&quot;

# Create the resource group that will host the Key Vault.
az group create --name ${RESOURCE_GROUP_NAME} --location ${LOCATION}

# Create keyvault and secret
az keyvault create --resource-group ${RESOURCE_GROUP_NAME} \
   --location ${LOCATION} \
   --name ${KEYVAULT_NAME}

az keyvault wait --name ${KEYVAULT_NAME} --created

az keyvault secret set --vault-name ${KEYVAULT_NAME} \
   --name ${KEYVAULT_SECRET_NAME} \
   --value &quot;Test&quot;

export KEYVAULT_URL=$(az keyvault show -g ${RESOURCE_GROUP_NAME} -n ${KEYVAULT_NAME} --query properties.vaultUri -o tsv)</code></pre>
</figure>
                            
                                

                                                                                                
                                
<div class="t-large ">
<h3>Create a Managed Identity to federate access to a KeyVault</h3></div>
                            
                                

                                                                                                
                                <figure class="code_container">
        

    <figcaption>
        <span>bash</span> <!-- Display the languageTitle or languageCode -->
        <button class="copy-button" data-clipboard-target="#code_block_69d0b91b1fda2">Copy to clipboard</button>
    </figcaption>
    <pre><code id="code_block_69d0b91b1fda2" class="language-bash">export IDENTITY_NAME=&quot;aws-access-az&quot;

az group create --name ${RESOURCE_GROUP_NAME} --location ${LOCATION}

az identity create --name ${IDENTITY_NAME} --resource-group ${RESOURCE_GROUP_NAME} --query principalId -o tsv

export USER_ASSIGNED_IDENTITY_CLIENT_ID=&quot;$(az identity show --name &quot;${IDENTITY_NAME}&quot; --resource-group &quot;${RESOURCE_GROUP_NAME}&quot; --query &#039;clientId&#039; -otsv)&quot;
export USER_ASSIGNED_IDENTITY_OBJECT_ID=&quot;$(az identity show --name &quot;${IDENTITY_NAME}&quot; --resource-group &quot;${RESOURCE_GROUP_NAME}&quot; --query &#039;principalId&#039; -otsv)&quot;


az keyvault set-policy --name ${KEYVAULT_NAME} \
  --secret-permissions get \
  --object-id ${USER_ASSIGNED_IDENTITY_OBJECT_ID} \
  --resource-group ${RESOURCE_GROUP_NAME}  \
  --subscription ${AZURE_SUBSCRIPTION_ID}

# The subject must match the Kubernetes service account we create in EKS below.
export SERVICE_ACCOUNT_NAMESPACE=&quot;azworkload&quot;
export SERVICE_ACCOUNT_NAME=&quot;azworkload&quot;

az identity federated-credential create \
    --name &quot;kubernetes-federated-credential&quot; \
    --identity-name ${IDENTITY_NAME} \
    --resource-group ${RESOURCE_GROUP_NAME} \
    --issuer ${OIDC_URL} \
    --subject &quot;system:serviceaccount:${SERVICE_ACCOUNT_NAMESPACE}:${SERVICE_ACCOUNT_NAME}&quot;</code></pre>
</figure>
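Token exchange only succeeds when the federated credential's `subject` exactly matches the Kubernetes service account. A quick way to compare the registered subjects against the expected value (variable names as used throughout this walkthrough):

```shell
# The subject Azure will accept, derived from the service account details.
EXPECTED_SUBJECT="system:serviceaccount:${SERVICE_ACCOUNT_NAMESPACE}:${SERVICE_ACCOUNT_NAME}"
echo "expected subject: ${EXPECTED_SUBJECT}"

# List the subjects registered on the managed identity for comparison.
az identity federated-credential list \
    --identity-name ${IDENTITY_NAME} \
    --resource-group ${RESOURCE_GROUP_NAME} \
    --query '[].subject' -o tsv
```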
                            
                                

                                                                                                
                                
<div class="t-large ">
<h3>Create a service account in EKS and annotate</h3></div>
                            
                                

                                                                                                
                                <figure class="code_container">
        

    <figcaption>
        <span>bash</span> <!-- Display the languageTitle or languageCode -->
        <button class="copy-button" data-clipboard-target="#code_block_69d0b91b1fdcb">Copy to clipboard</button>
    </figcaption>
    <pre><code id="code_block_69d0b91b1fdcb" class="language-bash">export SERVICE_ACCOUNT_NAMESPACE=&quot;azworkload&quot;
export SERVICE_ACCOUNT_NAME=&quot;azworkload&quot;
kubectl create ns ${SERVICE_ACCOUNT_NAMESPACE}
kubectl create sa ${SERVICE_ACCOUNT_NAME} -n ${SERVICE_ACCOUNT_NAMESPACE}

kubectl annotate sa ${SERVICE_ACCOUNT_NAME} -n ${SERVICE_ACCOUNT_NAMESPACE} azure.workload.identity/tenant-id=&quot;${AZURE_TENANT_ID}&quot; --overwrite
kubectl annotate sa ${SERVICE_ACCOUNT_NAME} -n ${SERVICE_ACCOUNT_NAMESPACE} azure.workload.identity/client-id=&quot;${USER_ASSIGNED_IDENTITY_CLIENT_ID}&quot; --overwrite</code></pre>
</figure>
                            
                                

                                                                                                
                                
<div class="t-large ">
<h3>Deploy the workload in EKS</h3></div>
                            
                                

                                                                                                
                                <figure class="code_container">
        

    <figcaption>
        <span>bash</span> <!-- Display the languageTitle or languageCode -->
        <button class="copy-button" data-clipboard-target="#code_block_69d0b91b1fdeb">Copy to clipboard</button>
    </figcaption>
    <pre><code id="code_block_69d0b91b1fdeb" class="language-bash">cat &lt;&lt;EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: kv-read
  namespace: ${SERVICE_ACCOUNT_NAMESPACE}
  labels:
    azure.workload.identity/use: &quot;true&quot;
spec:
  serviceAccountName: ${SERVICE_ACCOUNT_NAME}
  containers:
    - image: ghcr.io/azure/azure-workload-identity/msal-go
      name: oidc
      env:
      - name: KEYVAULT_URL
        value: ${KEYVAULT_URL}
      - name: SECRET_NAME
        value: ${KEYVAULT_SECRET_NAME}
  nodeSelector:
    kubernetes.io/os: linux
EOF</code></pre>
</figure>
                            
                                

                                                                                                
                                
<div class="t-large ">
<h3>Validate access</h3></div>
                            
                                

                                                                                                
                                
<div class="t-small ">
<p>To confirm that we can access the secret from the KeyVault, check the logs of the <code>kv-read</code> pod created above, which should print the secret value. Note that the container image used here relies on the Microsoft Authentication Library (MSAL) for Go to make use of the Workload Identity when accessing the KeyVault.</p></div>
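<div class="t-small ">
<p>For example, assuming the <code>SERVICE_ACCOUNT_NAMESPACE</code> variable from the earlier steps is still set in your shell, the check could look like this:</p></div>

```shell
# Illustrative check, assuming the SERVICE_ACCOUNT_NAMESPACE variable
# from the earlier steps is still set. If the Workload Identity
# federation is configured correctly, the secret value appears here.
kubectl logs kv-read -n ${SERVICE_ACCOUNT_NAMESPACE}
```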
                            
                                

                                                                                                                                                                
                                <h2 class="t-3xl ">Conclusion</h2>
                            
                                

                                                                                                                                    
                                
<div class="t-small ">
<p>OIDC plays a crucial role in verifying identities across various cloud platforms, simplifying the complex process of integrating identity verification into OAuth 2.0. It allows for seamless, secure communication between cloud providers by ensuring that identities are authenticated properly before granting access to resources. This is particularly beneficial in scenarios where services from multiple cloud providers, like AWS and Azure, need to interoperate securely without sharing credentials directly.</p><p>Azure has abstracted much of the complexity involved in setting up the OIDC provider, offering a more user-friendly experience compared to AWS. Azure's Workload Identity integration with Azure Active Directory (AAD) provides a streamlined process where the identity management is tightly coupled with the Azure ecosystem, reducing manual configurations and potential errors.</p><p>AWS, while more complex in its setup, offers robust and flexible configurations through its IAM Roles for Service Accounts (IRSA) mechanism. This approach, although requiring a deeper understanding and more detailed configuration, allows for finely-tuned access control and extensive customization options, catering to advanced use cases and specific security requirements.</p></div>
                            
                                

                                                                                                                                                                
                                <h2 class="t-3xl ">References</h2>
                            
                                

                                                                                                                                    
                                
<div class="t-small ">
<ul><li><p><a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_create_oidc_verify-thumbprint.html" target="_blank">Obtain the thumbprint for an OpenID Connect identity provider</a></p></li><li><p><a href="https://learn.microsoft.com/en-us/azure/aks/use-oidc-issuer" target="_blank">Create an OpenID Connect provider on Azure Kubernetes Service (AKS)</a></p></li><li><p><a href="https://www.rfc-editor.org/rfc/rfc5246#section-7.4.2" target="_blank">Server Certificate</a></p></li><li><p><a href="https://aws.amazon.com/blogs/containers/diving-into-iam-roles-for-service-accounts/" target="_blank">Diving into IAM Roles for Service Accounts</a></p></li></ul></div>
                            
                                

                                                                                                                                                                                                ]]>
        </description>
    </item>
      <item>
        <title><![CDATA[Resume data replication in Postgres and Node.js]]></title>
        <link>https://nearform.estd.dev/digital-community/resume-data-replication-in-postgres-and-node-js</link>
        <guid>https://nearform.estd.dev/@/page/0e819ad2-1f8f-4e4a-ad00-be5b5278e4a6</guid>

      
                    <category>Digital Community</category>
              
    

        <pubDate>Fri, 19 Jul 2024 00:00:00 +0000</pubDate>
            <author>
            Manuel Spigolon            </author>
                            <media:content url="https://nearform.estd.dev/media/pages/digital-community/resume-data-replication-in-postgres-and-node-js/4f8c02955b-1725542994/blog-resume-data-replication-in-postgres-and-nodejs-500x300-crop-q80.png" type="image/webp" medium="image" duration="10"> </media:content>
            
            <description>
                
            <![CDATA[
            <h2>

How to resume replication from the point where the Node.js application was stopped
</h2>
                                                                                                                                        
                                
<div class="t-large ">
<p>This article is a continuation of <u><a href="https://www.nearform.com/digital-community/real-time-data-replication-in-postgres-and-node-js/" target="_blank">Real-time data Replication in Postgres and Node.js</a></u>. Before reading this article, I recommend you read the previous one because it provides essential context to the points I cover.</p></div>
                            
                                

                                                                                                
                                
<div class="t-small ">
<p>In our previous article, we discussed how to replicate data from a Postgres database to a Node.js application in real-time using logical replication. However, if the Node.js application crashes or stops for some reason, the replication will cease, and we risk losing the data that our system produces in the meantime via another microservice or application.</p><p>In this article, I discuss how to resume replication from the last point where the Node.js application stopped by using a persistent replication slot in the Postgres database. This ensures that our application doesn't lose events produced by other microservices or applications during downtime.</p><p><strong>Editor’s note:</strong> This is a cross-post written by Senior Software Developer, Manuel Spigolon. Manuel has his own blog at <u><a href="https://backend.cafe/" target="_blank">backend.cafe</a></u> where you can subscribe for updates and find more great posts. Some of the links in the article point to Manuel’s personal GitHub account.</p></div>
                            
                                

                                                                                                                                                                
                                <h2 class="t-3xl ">Creating a replication slot</h2>
                            
                                

                                                                                                                                    
                                
<div class="t-small ">
<p>To resume replication, we need to create a replication slot in the Postgres database. A replication slot is a logical entity that keeps track of changes happening in the database and sends them to the subscriber. The <code>postgres</code> package we used in the previous article automatically created a replication slot for us, but it was not persistent: it was a <a href="https://www.postgresql.org/docs/16/view-pg-replication-slots.html" target="_blank"><code>TEMPORARY</code></a> replication slot that was removed when the subscriber disconnected.</p><p>Since we want to resume replication from the point where the Node.js application was stopped, we need to create a persistent replication slot. Let's create one in a new <code>setup-replication.js</code> file:</p></div>
                            
                                

                                                                                                
                                <figure class="code_container">
        

    <figcaption>
        <span>js</span> <!-- Display the languageTitle or languageCode -->
        <button class="copy-button" data-clipboard-target="#code_block_69d0b91b202b2">Copy to clipboard</button>
    </figcaption>
    <pre><code id="code_block_69d0b91b202b2" class="language-js">import pg from &#039;pg&#039;
const { Client } = pg

const client = new Client({ user: &#039;postgres&#039;, password: &#039;foopsw&#039; })
await client.connect()

await createReplicationSlotIfNotExists(&#039;foo_slot&#039;)

await client.end()

async function createReplicationSlotIfNotExists (slotName) {
  const slots = await client.query(&#039;SELECT * FROM pg_replication_slots WHERE slot_name = $1&#039;, [slotName])

  if (!slots.rows.length) {
    const newSlot = await client.query(&quot;SELECT * FROM pg_create_logical_replication_slot($1, &#039;pgoutput&#039;)&quot;, [slotName])
    console.log(&#039;Created replication slot&#039;, newSlot.rows[0])
  } else {
    console.log(&#039;Slot already exists&#039;, slots.rows[0])
  }
}</code></pre>
</figure>
                            
                                

                                                                                                
                                
<div class="t-small ">
<p>We are using the <a href="https://www.postgresql.org/docs/16/view-pg-replication-slots.html" target="_blank"><code>pg_replication_slots</code></a> view to check whether a replication slot with the given name already exists. If it doesn't exist, we create a new replication slot using the <a href="https://www.postgresql.org/docs/16/functions-admin.html#FUNCTIONS-REPLICATION" target="_blank"><code>pg_create_logical_replication_slot</code></a> function.</p><p>Note that we specified the <a href="https://www.postgresql.org/docs/16/protocol-logical-replication.html" target="_blank"><code>pgoutput</code> plugin</a> in the function call to decode the changes in the replication slot. This is the default plugin for logical replication, and it ships with Postgres. Be aware that there are other plugins, such as:</p><ul><li><p>The <a href="https://www.postgresql.org/docs/16/test-decoding.html" target="_blank"><code>test_decoding</code> plugin</a>, the simplest plugin that ships with Postgres, useful as a starting point for building your own custom plugin.</p></li><li><p><a href="https://packages.ubuntu.com/noble/postgresql-16-wal2json" target="_blank"><code>wal2json</code></a>, which must be installed separately in the Postgres database before you can use it in the <code>pg_create_logical_replication_slot</code> function.</p></li></ul><p>Each plugin has its own advantages and disadvantages, so choose the one that best fits your use case. The biggest difference between <code>test_decoding</code> and <code>pgoutput</code> is that the former does not accept a publication name as a parameter while the <u><a href="https://github.com/postgres/postgres/blob/3c469a939cf1cc95b136653e7c6e27e472dc0472/src/backend/replication/pgoutput/pgoutput.c#L449-L452" target="_blank">latter does</a></u>. This means that you can use <code>pgoutput</code> to filter the changes you want to replicate, while <code>test_decoding</code> replicates all changes in the database without filtering them!</p><p>Now, run the <code>setup-replication.js</code> file to create the replication slot.</p></div>
                            
                                

                                                                                                                                                                
                                <h2 class="t-3xl ">Configuring the consumer</h2>
                            
                                

                                                                                                                                    
                                
<div class="t-small ">
<p>In the previous article, we already created the <code>setup-consumer.js</code> that creates the publications that our application is interested in. So, we can reuse the same file and just run it — if we haven't already.</p><p>As a reminder: you will first need to start the Postgres server and create the <code>foo</code> database.</p></div>
                            
                                

                                                                                                                                                                
                                <h2 class="t-3xl ">Resuming the replication</h2>
                            
                                

                                                                                                                                    
                                
<div class="t-small ">
<p>We are ready to create a new <code>consumer-resume.js</code> file that will resume replication from the last point where the Node.js application was stopped, so let's jump into it:</p></div>
                            
                                

                                                                                                
                                <figure class="code_container">
        

    <figcaption>
        <span>js</span> <!-- Display the languageTitle or languageCode -->
        <button class="copy-button" data-clipboard-target="#code_block_69d0b91b20301">Copy to clipboard</button>
    </figcaption>
    <pre><code id="code_block_69d0b91b20301" class="language-js">import { LogicalReplicationService, PgoutputPlugin } from &#039;pg-logical-replication&#039;

const client = new LogicalReplicationService({
  user: &#039;postgres&#039;,
  password: &#039;foopsw&#039;
}, { acknowledge: { auto: false } })

client.on(&#039;data&#039;, async (lsn, log) =&gt; {
  if (log.tag === &#039;insert&#039;) {
    console.log(`${lsn}) Received insert: ${log.relation.schema}.${log.relation.name} ${log.new.id}`)
  } else if (log.relation) {
    console.log(`${lsn}) Received log: ${log.relation.schema}.${log.relation.name} ${log.tag}`)
  }

  await client.acknowledge(lsn)
})

const eventDecoder = new PgoutputPlugin({
  // Get a complete list of available options at:
  // https://www.postgresql.org/docs/16/protocol-logical-replication.html
  protoVersion: 4,
  binary: true,
  publicationNames: [
    &#039;foo_odd&#039;,
    &#039;foo_update_only&#039;
  ]
})

console.log(&#039;Listening for changes...&#039;)
process.on(&#039;SIGINT&#039;, async () =&gt; {
  console.log(&#039;Stopping client...&#039;)
  await client.stop()
})

await client.subscribe(eventDecoder, &#039;foo_slot&#039;)</code></pre>
</figure>
                            
                                

                                                                                                
                                
<div class="t-small ">
<p>This time, we are using the <a href="https://www.npmjs.com/package/pg-logical-replication" target="_blank"><code>pg-logical-replication</code> package</a> to demonstrate resuming the replication. Its low-level API gives us more control over the replication process; without it, we would not be able to configure the plugin to receive only the changes we are interested in.</p><p>The code can be explained as follows:</p><ul><li><p>1: We create a new <code>LogicalReplicationService</code> instance and pass the connection options to it. Note that we set the <code>acknowledge.auto</code> option to <code>false</code> to acknowledge the changes manually; otherwise, they would be acknowledged automatically. By setting this option to <code>false</code>, we gain even more control over the process.</p></li><li><p>2: We listen to the <code>data</code> event to receive changes from the replication slot.</p><ul><li><p>At this point, you should process the <code>log</code> and apply your business logic. In this case, we're just logging the changes to the console.</p></li></ul><ul><li><p>After processing the changes, you must acknowledge them using the <code>acknowledge</code> method; otherwise, the slot will not advance. The <code>lsn</code> (<strong>Log Sequence Number</strong>) is the unique identifier of each change in the database and is used to track the position in the replication slot.</p></li></ul></li><li><p>3: We create a new <code>PgoutputPlugin</code> instance and pass it to the <code>subscribe</code> method to establish a connection with the replication slot.</p></li></ul><p>To start the application, run <code>node consumer-resume.js</code>, and it will begin receiving changes from the replication slot. If all the steps were done correctly, you can start the <code>producer.js</code> file that we wrote in the previous article to produce changes in the database and watch them appear in the consumer application.</p><p>If you stop the consumer application by pressing <code>Ctrl+C</code>, the replication will stop, and the slot will not move forward. However, if you start the <code>consumer-resume.js</code> application again, it will resume replication from the point where it was stopped! 🎉</p><p>Moreover, the output shows only the changes from the <code>foo_odd</code> and <code>foo_update_only</code> publications, which we configured in the <code>PgoutputPlugin</code> instance, so we will see updates and inserts with odd <code>id</code> numbers only:</p></div>
                            
                                

                                                                                                
                                <figure class="code_container">
        

    <figcaption>
        <span>text</span> <!-- Display the languageTitle or languageCode -->
        <button class="copy-button" data-clipboard-target="#code_block_69d0b91b2034c">Copy to clipboard</button>
    </figcaption>
    <pre><code id="code_block_69d0b91b2034c" class="language-text">0/15648E0) Received insert: public.foo 18
0/15649A0) Received log: public.foo update
0/1564AF0) Received log: public.foo update
0/1564B80) Received insert: public.foo 20
0/1564C40) Received log: public.foo update
0/1564D90) Received log: public.foo update
0/1564E20) Received insert: public.foo 22
0/1564EE0) Received log: public.foo update
0/1565030) Received log: public.foo update
0/15650C0) Received insert: public.foo 24</code></pre>
</figure>
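<div class="t-small ">
<p>The <code>0/15648E0</code>-style prefixes in this output are the LSNs mentioned above. As a side note, and purely as a hypothetical helper that is not part of the article's files, an LSN string can be converted to a single number, which makes it easy to compare or persist replication positions yourself:</p></div>

```javascript
// Hypothetical helper, not part of the article's code: a Postgres LSN
// is printed as two hexadecimal numbers separated by a slash. Combining
// them into one BigInt (high part scaled by 2^32) makes LSNs easy to
// compare or store, e.g. to track the last acknowledged position.
function lsnToBigInt (lsn) {
  const [hi, lo] = lsn.split('/')
  return BigInt(`0x${hi}`) * 4294967296n + BigInt(`0x${lo}`)
}

console.log(lsnToBigInt('0/15648E0')) // 22431968n
console.log(lsnToBigInt('0/15649A0') > lsnToBigInt('0/15648E0')) // true
```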
                            
                                

                                                                                                                                                                
                                <h2 class="t-3xl ">Conclusion</h2>
                            
                                

                                                                                                                                    
                                
<div class="t-small ">
<p>In this article, we discussed how to resume replication from the point where the Node.js application was stopped.</p><p>We created a persistent replication slot in the Postgres database and used the <code>pg-logical-replication</code> package to demonstrate resuming replication. This ensures that our application doesn't lose data produced by other microservices or applications during downtime.</p><p>In doing so, we did not change the <code>producer.js</code> file, which means that the producer can continue to produce changes in the database without any issues and the previous publication setup is still valid: we just configured the replication slot and the new consumer manually.</p><p>Remember, a replication slot retains changes in the database until the slot is dropped or the changes are acknowledged by the subscriber. If not managed properly, this can lead to high disk usage, because Postgres keeps the changes in the WAL indefinitely instead of removing them.</p><p>I hope you enjoyed this article and learned something new! Does it deserve a comment and a share? 🚀</p></div>
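<div class="t-small ">
<p>As a hypothetical cleanup sketch (not one of the article's files), a slot that is no longer needed can be dropped with the <code>pg_drop_replication_slot</code> function, which lets Postgres reclaim the retained WAL:</p></div>

```javascript
// Hypothetical cleanup sketch: drop the replication slot once it is no
// longer needed, so Postgres stops retaining WAL segments on its behalf.
// Connection options mirror the setup-replication.js example above.
import pg from 'pg'
const { Client } = pg

const client = new Client({ user: 'postgres', password: 'foopsw' })
await client.connect()

// The consumer must be disconnected before the slot can be dropped.
await client.query('SELECT pg_drop_replication_slot($1)', ['foo_slot'])
console.log('Dropped replication slot foo_slot')

await client.end()
```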
                            
                                

                                                                                                                                                                                                ]]>
        </description>
    </item>
      <item>
        <title><![CDATA[Prevent, respond, recover: Boosting digital resilience in financial services by moving to the cloud]]></title>
        <link>https://nearform.estd.dev/insights/prevent-respond-recover-boosting-digital-resilience-in-financial-services-by-moving-to-the-cloud</link>
        <guid>https://nearform.estd.dev/@/page/2288da2d-67b0-4bdf-adfb-4437afc79b06</guid>

      
                    <category>Insights</category>
              
    

        <pubDate>Fri, 19 Jul 2024 00:00:00 +0000</pubDate>
            <author>
                        </author>
                            <media:content url="https://nearform.estd.dev/media/pages/insights/prevent-respond-recover-boosting-digital-resilience-in-financial-services-by-moving-to-the-cloud/53fd468c08-1722602876/blog-boosting-digital-resilience-in-financial-services-by-moving-to-the-cloud-img-500x300-crop-q80.jpg" type="image/webp" medium="image" duration="10"> </media:content>
            
            <description>
                
            <![CDATA[
            <h2>

Though caution is understandable, there is some urgency for leaders to embrace cloud services 
</h2>
                                                                                                                                        
                                
<div class="t-large ">
<p>Financial services organisations are reluctant to embrace cloud technology — “you’ve got to go slowly and you’ve got to go cautiously”, as <u><a href="https://www.nytimes.com/2022/01/03/business/wall-street-cloud-computing.html" target="_blank">David M. Solomon, the chief executive of Goldman Sachs</a></u> notes.</p></div>
                            
                                

                                                                                                
                                
<div class="t-small ">
<p>True, to a point, but in reality, it’s not that these companies aren’t integrating cloud services, or that they’re dragging their feet in doing so. They’re just not <em>fully </em>embracing them — yet. </p></div>
                            
                                

                                                                                                
                                
<div class="table t-small">



  <table>
    <thead>
      <tr>
                  <th></th>
                  <th></th>
              </tr>
    </thead>
    <tbody>
              <tr>
                      <td><strong>Nearform expert insight:&nbsp;</strong></td>
                      <td>“We help organisations build environments in the cloud and leverage cloud services to release applications easier and faster, while shifting their cost, security and ownership models for greater scalability and agility.”<br><br>Keith Madsen, Technical Director at&nbsp;Nearform</td>
                  </tr>
          </tbody>
  </table>
</div>                            
                                

                                                                                                
                                
<div class="t-small ">
<p>Though caution is understandable, there is some urgency for leaders to make the switch. <a href="https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/three-big-moves-that-can-decide-a-financial-institutions-future-in-the-cloud" target="_blank">McKinsey reports</a> that “Fortune 500 financial institutions alone could generate as much as $60 billion to $80 billion in run-rate EBITDA in 2030 by making the most of the cost-optimization levers and business use cases unlocked by cloud.”</p></div>
                            
                                

                                                                                                
                                
<div class="table t-small">



  <table>
    <thead>
      <tr>
                  <th></th>
                  <th></th>
              </tr>
    </thead>
    <tbody>
              <tr>
                      <td><strong>Expert insight:</strong></td>
                      <td>“I think what struck me most was that nearly every bank has taken off on its journey to the cloud, but very few have gotten more than a few feet off the ground.”<br><br><a href="https://biztechmagazine.com/article/2022/05/why-banks-have-been-slow-embrace-cloud-and-what-could-soon-change-perfcon" target="_blank">Mike Abbott</a>, Global Banking Lead, Accenture</td>
                  </tr>
              <tr>
                      <td><strong>By the numbers:&nbsp;</strong></td>
                      <td>- <a href="https://www.capgemini.com/news/press-releases/91-of-banks-and-insurers-have-initiated-their-cloud-journey-yet-many-are-unable-to-realize-full-business-value/" target="_blank">Only 40% of banks</a> and less than half of insurers fully achieved their expected outcomes from migrating to cloud<br>- More than 50% of firms have only moved a minimal portion of their core business applications to the cloud<br>- Although banks <a href="https://www.forbes.com/sites/davidparker/2023/09/06/why-financial-services-firms-are-struggling-to-succeed-with-cloud-computing/?sh=2f7c9bec76f2" target="_blank">almost doubled</a> their reliance on the cloud between 2021 and 2022, this still amounted to an average of only 15% of their total workloads<br>- Cloud and edge computing are the top technologies being considered by financial institutions, with <a href="https://fintech.global/2024/03/09/financial-institutions-are-shifting-their-workload-to-the-cloud-in-2024/" target="_blank">84% of executives</a> recognising their relevance<br>- 89% of financial services executives believe that a <a href="https://www.capgemini.com/news/press-releases/91-of-banks-and-insurers-have-initiated-their-cloud-journey-yet-many-are-unable-to-realize-full-business-value/" target="_blank">cloud-enabled platform</a> is crucial for delivering the agility, flexibility, innovation, and productivity necessary to meet escalating business demands</td>
                  </tr>
          </tbody>
  </table>
</div>                            
                                

                                                                                                
                                
<div class="t-small ">
<p>Financial services executives recognise that, in order to compete and stand out in an increasingly crowded marketplace, the speed, power, and flexibility afforded by cloud computing are critical. Most financial services companies are indeed exploring how moving to the cloud can benefit their businesses, but many are still only scratching the surface of what’s possible.</p></div>
                            
                                

                                                                                                                                                                                                <figure>
                                        <img src="https://nearform.estd.dev/media/pages/insights/prevent-respond-recover-boosting-digital-resilience-in-financial-services-by-moving-to-the-cloud/f4909893a6-1722602876/blog-boosting-digital-resilience-in-financial-services-by-moving-to-the-cloud-gartner-chart-500x300-crop-q80.png" style="width: 100%; height: auto; margin-top: 20px; margin-bottom: 10px;" alt="" />
                                    </figure>
                                                                
                                

                                                                                                
                                
<div class="t-small ">
<p>Graphic: GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally and is used herein with permission. All rights reserved.</p></div>
                            
                                

                                                                                                
                                
<div class="t-small ">
<p>Considering investments in legacy systems and concerns about regulatory compliance and data security, hesitancy to go all-in on the cloud has been understandable. However, changing regulations and the obligation for financial services organisations to have the resilience to identify potential disruptions and minimise their impact highlight the importance of modernising with cloud-native systems.</p></div>
                            
                                

                                                                                                                                                                
                                <h2 class="t-3xl ">Defining (and regulating) digital resilience</h2>
                            
                                

                                                                                                                                    
                                
<div class="t-small ">
<p>Threats to business continuity encompass human-made attacks such as cybersecurity breaches, unpredictable events like power outages or natural disasters, and potentially preventable issues including hardware or software failure. <a href="https://www.splunk.com/en_us/blog/learn/digital-resilience.html" target="_blank">Digitally resilient organisations</a> analyse available information to anticipate potential disruptions, and have the resources to minimise and recover from the impact of disruptions that do occur. For financial services organisations in particular, disruptions can mean not only lost revenue for the affected company, but also losses for customers and negative effects on the larger economy. For this reason, digital resilience is even more necessary.</p><p>This necessity extends beyond responsible business practice: new regulations, including the <u><a href="https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:32022R2554&amp;from=FR" target="_blank">Digital Operational Resilience Act (DORA)</a></u> in the European Union, require financial services companies to demonstrate that they have thorough incident response plans and infrastructure in place to prepare for and mitigate interruptions of service. According to the Act, responsibility to comply lies not just with financial companies as a whole, but with individual officers of the company. The text states: “Under DORA, the Board of Directors is personally liable for <a href="https://www.capgemini.com/insights/expert-perspectives/a-digital-edge-for-financial-services-navigating-cybersecurity-in-the-era-of-the-digital-operational-resilience-act-dora/#:~:text=DORA%20outlines%20specific%20measures%20that,event%20of%20a%20cyber%20disruption" target="_blank">cybersecurity governance</a> and risk management, including all aspects such as reporting, testing and other necessary measures.”</p><p>In the United States, the Federal Reserve Board, Office of the Comptroller of the Currency, and the Federal Deposit Insurance Corporation worked together to develop and issue the “<u><a href="https://www.federalreserve.gov/supervisionreg/srletters/SR2024.htm">Sound Practices to Strengthen Operational Resilience</a></u>” guidance. This guidance explicitly defines practices that large banks should undertake to prepare for and address the operational risk of cyberattacks, natural disasters, and pandemics. A <u><a href="https://www.fca.org.uk/publications/policy-statements/ps21-3-building-operational-resilience#:~:text=By%2031%20March%202022%2C%20firms,vulnerabilities%20in%20their%20operational%20resilience." target="_blank">similar group</a></u> in the UK, made up of the UK's supervisory authorities, the Prudential Regulation Authority (PRA), the Financial Conduct Authority (FCA) and the Bank of England (BoE), announced resiliency requirements for UK banks and insurers. The purpose of all of these coalitions is to ensure that the financial institutions operating in their countries take appropriate and adequate steps to protect consumers, the overall financial sector, and country economies.</p></div>
                            
                                

                                                                                                                                                                
                                <h2 class="t-3xl ">Digital resilience can limit losses from disruptive events</h2>
                            
                                

                                                                                                                                    
                                
<div class="t-small ">
<p>The prospect of complying with regulations can sound daunting, but in the case of regulations instituted to promote digital resilience, there are real economic benefits. Key requirements of DORA and other operational resilience guidance documents are incident-response plans, business continuity plans and regular risk assessments. Regulations aside, all of these assets are important for companies to have in place and up to date in order to limit financial losses in the event of a disruption. Fines for noncompliance with regulations can be steep, but they would be minimal in comparison to the damage done by a major event for which an organisation is unprepared.</p><p>It makes financial sense for companies to invest in resilience, and many are in the process of doing so. Studies show that executives in financial services organisations are decreasing spending on legacy systems, and increasing it on technologies that boost productivity and promote resilience, such as cloud services. In 2022, 53% of banking tech executives surveyed said that they planned to increase investment in cloud platforms by the largest amount compared to other tech spending**. McKinsey reports that by 2030, value drivers could enable cloud services to deliver more than $3 trillion in EBITDA value, <a href="https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/projecting-the-global-value-of-cloud-3-trillion-is-up-for-grabs-for-companies-that-go-beyond-adoption" target="_blank">$407 billion</a> of it in IT resilience improvement, across the Forbes Global 2000.</p></div>
                            
                                

                                                                                                                                                                                                <figure>
                                        <img src="https://nearform.estd.dev/media/pages/insights/prevent-respond-recover-boosting-digital-resilience-in-financial-services-by-moving-to-the-cloud/c660633f10-1722602876/blog-boosting-digital-resilience-in-financial-services-by-moving-to-the-cloud-mck-graph-500x300-crop-q80.png" style="width: 100%; height: auto; margin-top: 20px; margin-bottom: 10px;" alt="" />
                                    </figure>
                                                                
                                

                                                                                                
                                
<div class="t-small ">
<p>**Data source: Tool: Cloud Computing Use Cases for Banking and Investment Services, 2023, 21 June 2023 - ID G00796050 GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally and is used herein with permission. All rights reserved.</p></div>
                            
                                

                                                                                                                                                                
                                <h2 class="t-3xl ">How the cloud enables prevention, response and recovery</h2>
                            
                                

                                                                                                                                    
                                
<div class="t-small ">
<p>Cloud services are a key part of building resilience because they help organisations prevent, respond to and recover from disruption. They’re built to be reliable, with redundant systems, backup mechanisms and disaster recovery plans in place. These help minimise the risk of service interruptions and ensure applications and data remain accessible even in the event of hardware failures or other disruptions. </p><p>Additionally, cloud service providers handle the technical processes of database management and security. They also monitor and analyse their systems, which makes it possible to identify and address service delivery issues before they escalate, helping to maintain high levels of service availability and performance. Managed service plans enable organisations to contract these operational tasks to a third-party vendor and focus their resources on application development and improvement.</p><p>When it becomes necessary to respond to an event, Cloud Service Provider (CSP) flexibility enables quick adjustments to resources and configurations in response to service delivery issues such as performance bottlenecks or hardware failures. </p><p>In the recovery phase, cloud-based systems are able to recover from issues faster than traditional on-premises infrastructure. CSPs handle underlying infrastructure management, allowing organisations to focus on restoring service rather than troubleshooting hardware or software issues. Additionally, automated backups and data replication enable companies to quickly recover their data and applications if a truly disastrous event wipes out the existing infrastructure.</p></div>
                            
                                

                                                                                                                                                                
                                <h2 class="t-3xl ">Embracing the cloud</h2>
                            
                                

                                                                                                                                    
                                
<div class="t-small ">
<p>The reasons holding many financial services companies back from migrating more of their business to the cloud are fading away. But the complexity of modernising legacy systems or integrating them with cloud-based solutions requires an experienced partner who truly understands an organisation’s business and technology needs.</p></div>
                            
                                

                                                                                                
                                
<div class="table t-small">



  <table>
    <thead>
      <tr>
                  <th></th>
                  <th></th>
              </tr>
    </thead>
    <tbody>
              <tr>
                      <td><strong>Nearform client insight:</strong> </td>
                      <td>“With Nearform, we found a partner who could help us explore ‘the art of the possible’. They understood straight away what we were trying to do.”&nbsp;<br><br>Carlo Marcoli, API Economy Solutions Leader – Europe, IBM</td>
                  </tr>
          </tbody>
  </table>
</div>                            
                                

                                                                                                
                                
<div class="t-small ">
<p>In its work with IBM, Nearform leveraged its cloud expertise to develop a leading-edge open banking app that enables a complex, real-world customer journey.</p></div>
                            
                                

                                                                                                                                                                
                                <h2 class="t-3xl ">Nearform’s track record of boosting digital resilience</h2>
                            
                                

                                                                                                                                    
                                
<div class="t-small ">
<p>When diagnosing how to help a client’s business be more resilient, Nearform uses a proprietary method to get an overview of the business’ structure and operations. During the first phase, the goal is to gain a holistic understanding of system issues and where problems may lie, and to objectively assess the client’s technology, resources and processes in place.</p></div>
                            
                                

                                                                                                
                                
<div class="t-large ">
<p>Nearform case study: Building resilience and boosting observability for a global organisation</p></div>
                            
                                

                                                                                                
                                
<div class="table t-small">



  <table>
    <thead>
      <tr>
                  <th></th>
                  <th></th>
              </tr>
    </thead>
    <tbody>
              <tr>
                      <td><strong>Issue: </strong></td>
<td>Excessive service outages were preventing customers from accessing their accounts, making payments and more. The client organisation needed to rework how it monitors, prevents and responds to incidents to limit downtime and increase its resilience.</td>
                  </tr>
              <tr>
                      <td><strong>Solutions: </strong></td>
                      <td>Nearform experts developed a roadmap to improve the client’s Site Reliability Engineering (SRE) practices, and collaborated with company engineers to streamline monitoring and incident reporting. Specific improvements included:<br>- Standardisation/simplification of onboarding for digital services<br>- Automated alerts<br>- Custom dashboards showing site metrics and data</td>
                  </tr>
              <tr>
                      <td><strong>Impact:</strong>&nbsp;</td>
                      <td>- Updated process, from concept ideation to dashboard implementation, completed within 6 weeks&nbsp;<br>- Site improvements resulted in zero downtime during the first product launch&nbsp;<br>- Average time to recovery reduced by 93%</td>
                  </tr>
          </tbody>
  </table>
</div>                            
                                

                                                                                                
                                
<div class="t-small ">
<p>In this case study, Nearform identified core issues with monitoring and alerting, incident management and observability. The diagnostic phase revealed an over-reliance on manual operations and an inability to effectively prioritise tasks, making the team reactive instead of proactive. These insights led the team to the solutions, and provided a clear path to improving the company’s resilience.</p><p>There’s no denying that cloud migration is a complicated and challenging process. With an experienced partner to help identify the most effective path forward and design a secure, resilient, customised digital solution, financial services companies can fully embrace the power and versatility of the cloud. </p></div>
                            
                                

                                                                                                                                                                                                ]]>
        </description>
    </item>
      <item>
        <title><![CDATA[How AI can actually accelerate compliance efforts in financial services]]></title>
        <link>https://nearform.estd.dev/insights/how-ai-can-actually-accelerate-compliance-efforts-in-financial-services</link>
        <guid>https://nearform.estd.dev/@/page/581384d5-0d63-45e4-a311-906c6687e4a2</guid>

      
                    <category>Insights</category>
              
    

        <pubDate>Mon, 08 Jul 2024 00:00:00 +0000</pubDate>
            <author>
                        </author>
                            <media:content url="https://nearform.estd.dev/media/pages/insights/how-ai-can-actually-accelerate-compliance-efforts-in-financial-services/b258260654-1722602875/blog-how-ai-can-actually-accelerate-compliance-efforts-in-financial-services-pic-500x300-crop-q80.jpg" type="image/webp" medium="image" duration="10"> </media:content>
            
            <description>
                
            <![CDATA[
            <h2>

Artificial Intelligence can empower financial services companies to more effectively comply with changing regulations
</h2>
                                                                                                                                        
                                
<div class="t-large ">
<p>Governance, risk management, and compliance (GRC) are some of the most complex, stress-inducing and often misunderstood aspects of the financial services industry. Regulations are always evolving, as are the processes and systems that companies use to do business and manage GRC requirements. </p></div>
                            
                                

                                                                                                
                                
<div class="t-small ">
<p>To combat the growing complexity and bolster compliance efforts, emerging technologies like Artificial Intelligence (AI) have proven helpful. AI shows promise in managing complicated and changing requirements, automating repetitive tasks and giving human representatives more time to focus on strategic compliance efforts.</p></div>
                            
                                

                                                                                                
                                
<div class="table t-small">



  <table>
    <thead>
      <tr>
                  <th></th>
                  <th></th>
              </tr>
    </thead>
    <tbody>
              <tr>
                      <td><strong>Expert insight</strong></td>
                      <td>“Amidst expanding regulatory requirements, compliance functions are under tremendous pressure to adapt to the risks of today’s ever more interconnected and digitised landscape at speed and scale. Financial services firms are showing a greater willingness than ever before to invest in regulatory risk and compliance programs.”<br><br><a href="https://www.linkedin.com/in/frank-ewing-459a6b69" target="_blank">Frank Ewing</a>, CEO AML Rightsource</td>
                  </tr>
              <tr>
                      <td><strong>By the numbers</strong></td>
<td>- <a href="https://www.steel-eye.com/white-papers-and-e-books/annual-compliance-health-check-report-2023" target="_blank">76%</a> of financial services firms have seen increased compliance expenditure over the past year<br>- 39% of financial services business and IT leaders expect IT budgets to grow in 2024 due to regulatory issues or concerns*<br>- Financial institutions globally are spending $206.1 billion on compliance<br>- Organisations that used security AI and automation extensively reported an average of $1.76 million lower data breach costs per company compared to those that didn’t use AI</td>
                  </tr>
          </tbody>
  </table>
</div>                            
                                

                                                                                                
                                
<div class="t-small ">
<p>* Data source: Financial Services Business Priority Tracker 4Q23 2024 Gartner, Inc. and/or its affiliates. All rights reserved. 807706_C GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally and is used herein with permission. All rights reserved.</p></div>
                            
                                

                                                                                                                                                                
                                <h2 class="t-3xl ">The growing importance of regulatory compliance technology</h2>
                            
                                

                                                                                                                                    
                                
<div class="t-small ">
<p>Due to increased regulatory requirements and heightened attention to compliance, many financial services companies are increasing their focus on, and investment in, ensuring that they’re compliant. Estimates show that financial institutions will increase their spending on regulatory technology (RegTech) by approximately <a href="https://www.juniperresearch.com/research/fintech-payments/fintech-markets/regtech-market-size-report/" target="_blank">124% between 2023 and 2028</a>.</p><p>Driving this additional investment is the increase in fines for noncompliance with record-keeping and other regulations. In a survey, 69% of financial services executives reported that they expect regulators to increase the value of fines for record-keeping breaches.</p></div>
                            
                                

                                                                                                                                                                                                <figure>
                                        <img src="https://nearform.estd.dev/media/pages/insights/how-ai-can-actually-accelerate-compliance-efforts-in-financial-services/5d784a147d-1722602875/blog-how-ai-can-actually-accelerate-compliance-efforts-in-financial-services-the-state-of-financial-services-compliance-in-2024-500x300-crop-q80.png" style="width: 100%; height: auto; margin-top: 20px; margin-bottom: 10px;" alt="" />
                                    </figure>
                                                                
                                

                                                                                                
                                
<div class="t-small ">
<p>Graphic: <u><a href="https://www.steel-eye.com/white-papers-and-e-books/annual-compliance-health-check-report-2024#:~:text=High%2Dprofile%20cases%20involving%20record,predict%20the%20value%20to%20rise" target="_blank">The State of Financial Services Compliance 2024: Annual Compliance Health Check Report</a></u></p></div>
                            
                                

                                                                                                
                                
<div class="t-small ">
<p>In February 2024 alone, the <a href="https://fintech.global/2024/02/20/navigating-compliance-the-critical-role-of-record-keeping-in-financial-firms/" target="_blank">SEC imposed $81 million in fines</a> for record-keeping infractions. Globally, fines have been levied on companies that failed to record or retain electronic communications. Some of the largest institutions, including Wells Fargo, JP Morgan, Goldman Sachs, Morgan Stanley and Citigroup, have been punished with fines in the hundreds of millions, with <a href="https://www.nasdaq.com/articles/banks-fined-by-regulators-for-non-adherence-to-record-keeping" target="_blank">total penalties exceeding $2 billion</a>. </p><p>Factors contributing to the increasing number of regulations and strict enforcement include global geopolitical conflicts and changes, technological advancements in financial industry systems, and the use of <a href="https://fintech.global/2024/03/28/will-2024-be-the-year-of-compliance-technology/" target="_blank">artificial intelligence in financial crime</a>.</p></div>
                            
                                

                                                                                                                                                                
                                <h2 class="t-3xl ">Compliance priorities for financial services companies</h2>
                            
                                

                                                                                                                                    
                                
<div class="t-small ">
<p>Changing circumstances in global markets, developments in technology and organisation-specific strategies can affect which of the many areas of compliance pose the greatest risk, and therefore require the greatest attention from financial services company leaders.</p><p>Some of the areas that many organisations are currently focusing on are: Artificial Intelligence (AI), operational resilience, compliance risk assessment, compliance monitoring, and environmental, social and governance (ESG) issues.</p></div>
                            
                                

                                                                                                
                                
<div class="table t-small">



  <table>
    <thead>
      <tr>
                  <th></th>
                  <th></th>
              </tr>
    </thead>
    <tbody>
              <tr>
                      <td><strong>Nearform poll</strong></td>
                      <td>What compliance issues are top of mind for you? <a href="https://share.hsforms.com/10cjNM3QNTges9KmQH5i_3g16461" target="_blank">Share your thoughts</a> in our very short (3-question) poll</td>
                  </tr>
          </tbody>
  </table>
</div>                            
                                

                                                                                                                                                                
                                <h2 class="t-3xl ">How RegTech supports compliance</h2>
                            
                                

                                                                                                                                    
                                
<div class="t-small ">
<p>Compliance and governance are typically already part of most digital systems, and there are a number of different procedures, checks, and monitoring capabilities that assist compliance efforts. These include regulatory compliance checks such as Anti-Money Laundering (AML), Know Your Customer (KYC), General Data Protection Regulation (GDPR), and others. Data encryption and security measures are required to keep customer data secure, and records of all user activities and transactions (audit trails) are required in order to show that organisations are transparent and accountable. </p><p>Beyond the digital applications and modules that support compliance, there are also important actions that employees and staff are required to undertake. These include participation in compliance training and awareness programs, and conducting regular audits and assessments. Well-designed systems and thorough, informed oversight by human auditors can keep organisations from falling out of compliance, but the increasing complexity of regulatory requirements makes it more challenging to keep up with all relevant regulations.</p></div>
                            
                                

                                                                                                                                                                
                                <h2 class="t-3xl ">The present and future of AI-powered GRC </h2>
                            
                                

                                                                                                                                    
                                
<div class="t-small ">
<p>GenAI (generative AI) is proving to be a valuable asset in the area of regulatory compliance. When trained with information about regulations and policies, it can serve as a virtual “expert”, helping to assess compliance by comparing required policies and regulations with company policies, and answering user questions about regulations. Developers are also using it as a code accelerator, checking code for any misalignment with policy. When it detects possible breaches, it can <a href="https://www.mckinsey.com/capabilities/risk-and-resilience/our-insights/how-generative-ai-can-help-banks-manage-risk-and-compliance" target="_blank">alert the necessary people</a> and detail the situation clearly.</p><p>Additionally, AI can be highly effective at detecting and mitigating cyberfraud. When trained on large volumes of data showing what legitimate and fraudulent transactions look like, AI systems can monitor activity, then identify and block potentially fraudulent transactions. They can also communicate with customers to ask for more information or confirm details, which minimises the threat from identity theft, phishing attacks, credit card theft and document forgery.</p><p>Task automation is another area where AI excels. It speeds up the completion of repetitive processes like document verification, data analysis and transaction monitoring, and reduces mistakes caused by human error.</p></div>
                            
                                

                                                                                                
                                
<div class="table t-small">



  <table>
    <thead>
      <tr>
                  <th></th>
                  <th></th>
              </tr>
    </thead>
    <tbody>
              <tr>
                      <td><strong>Nearform expert insight</strong></td>
                      <td>"Maintaining AI policies can be tedious, but leveraging AI removes human drudgery from the process. By continuously updating, indexing, and integrating best practices and regulations, AI eases the burden of policy maintenance. AI-driven policy experts also enhance accessibility and compliance, making policies not only relevant but also user-friendly and effective."<br><br>Joe Szodfridt, Senior Solutions Principal, Nearform</td>
                  </tr>
          </tbody>
  </table>
</div>                            
                                

                                                                                                                                                                
                                <h2 class="t-3xl ">The flip side of AI: Compliance concerns</h2>
                            
                                

                                                                                                                                    
                                
<div class="t-small ">
<p>The promise of AI to help support compliance is counterbalanced by concerns about vulnerabilities that it may be used to exploit. AI has the potential to cause harm by facilitating cybercrimes, impersonating humans and presenting incorrect information as correct. Concerns such as these are driving many governments around the world to consider instituting regulations to govern it. These regulations and national policies are evolving as quickly as the technology itself, so AI-specific updates to existing rules and new additions will soon be in effect.</p><p>Adding to the complexity for international organisations, regulations will differ by country and sometimes even within countries or regions. </p><p>For example, while AI policy is still in development in the US, indications are that different agencies will have the ability to <a href="https://www.technologyreview.com/2024/01/08/1086294/four-lessons-from-2023-that-tell-us-where-ai-regulation-is-going/" target="_blank">create their own rules</a>. Having several sets of rules to follow from different agencies or even from different <u><a href="https://fpf.org/blog/colorado-enacts-first-comprehensive-u-s-law-governing-artificial-intelligence-systems/" target="_blank">states</a></u> will add levels of complexity to compliance efforts.</p><p>The UK is seeking to be <a href="https://www.gov.uk/government/consultations/ai-regulation-a-pro-innovation-approach-policy-proposals/outcome/a-pro-innovation-approach-to-ai-regulation-government-response" target="_blank">pro-innovation</a>, encouraging AI development and setting broad policy guidelines while emphasising five cross-sectoral principles for existing regulators to interpret and apply. Those principles are: safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress. Since regulators are still in the process of creating a cohesive policy, there’s no specific timeline for when regulations will be released. </p><p>While AI policy is still under development in these countries and others such as China, Australia and Brazil, the EU has produced the world’s first comprehensive law governing AI, which will come into effect in 2024. This law will ban some uses of AI, require companies to be transparent about how they develop and train their models, and hold them accountable for any harm. It will also <a href="https://www.technologyreview.com/2024/03/19/1089919/the-ai-act-is-done-heres-what-will-and-wont-change/" target="_blank">require AI-generated content to be labelled</a>, and create a system for citizens to lodge complaints if they believe they have been harmed by an AI system.</p></div>
                            
                                

                                                                                                                                                                
                                <h2 class="t-3xl ">Beyond regulatory compliance, AI applications for financial services companies should focus on broader risk management</h2>
                            
                                

                                                                                                                                    
                                
<div class="t-small ">
<p>First and foremost, responsible AI development means ensuring that people are guiding and reviewing the results of AI models. There’s no substitute for human verification that models are responding accurately, giving correct results, and that the data is correct and unbiased.</p><p>In addition to keeping humans in the loop, <a href="https://www.mckinsey.com/capabilities/risk-and-resilience/our-insights/how-generative-ai-can-help-banks-manage-risk-and-compliance" target="_blank">McKinsey outlines some steps</a> that financial organisations should take to manage the risks of GenAI: </p><ul><li><p>Ensure that everyone across the organisation is aware of the risks inherent in GenAI, publishing dos and don’ts and setting risk guardrails.</p></li><li><p>Update model identification criteria and model risk policy (in line with regulations such as the EU AI Act) to enable the identification and classification of GenAI models, and have an appropriate risk assessment and control framework in place.</p></li><li><p>Develop GenAI risk and compliance experts who can work directly with frontline development teams on new products and customer journeys.</p></li><li><p>Revisit existing know-your-customer, anti-money laundering, fraud, and cyber controls to ensure that they are still effective in a GenAI-enabled world.</p></li></ul></div>
                            
                                

                                                                                                                                                                
                                <h2 class="t-3xl ">Preparing for the coming era of AI-enabled compliance</h2>
                            
                                

                                                                                                                                    
                                
<div class="t-small ">
<p>Despite valid concerns, Artificial Intelligence can empower financial services companies to more effectively comply with changing regulations. The concerns should not be ignored, but with responsible and transparent development, they can be neutralised. </p><p>As AI continues to grow in importance, overly cautious adoption may lead to competitive disadvantages, while overly rapid implementation could introduce serious compliance risks. Partnering with an experienced AI and data consultancy such as Nearform can ensure that organisations take a balanced approach that is tailored to their needs.</p><p>With a proven track record of <u><a href="https://www.nearform.com/work/setting-solid-foundations-for-real-change/" target="_blank">working with major financial services organisations</a></u> and global enterprises, Nearform helps unlock the value of AI safely and strategically, positioning companies for future success in a rapidly evolving landscape.</p></div>
                            
                                

                                                                                                                                                                                                ]]>
        </description>
    </item>
      <item>
        <title><![CDATA[Exploring if Bruno is a viable alternative API testing tool to Postman]]></title>
        <link>https://nearform.estd.dev/digital-community/exploring-if-bruno-is-a-viable-alternative-api-testing-tool-to-postman</link>
        <guid>https://nearform.estd.dev/@/page/c9effab4-af71-430a-bd6c-0956b274e288</guid>

      
                    <category>Digital Community</category>
              
    

        <pubDate>Fri, 05 Jul 2024 00:00:00 +0000</pubDate>
            <author>
                        </author>
                            <media:content url="https://nearform.estd.dev/media/pages/digital-community/exploring-if-bruno-is-a-viable-alternative-api-testing-tool-to-postman/9361a3b068-1725542994/blog-exploring-if-bruno-is-a-viable-alternative-api-testing-tool-to-postman-500x300-crop-q80.png" type="image/webp" medium="image" duration="10"> </media:content>
            
            <description>
                
            <![CDATA[
            <h2>

We assess whether Bruno can do everything Postman does
</h2>
                                                                                                                                        
                                
<div class="t-large ">
<p>This blog post is aimed at individuals who use <a href="https://www.postman.com/" target="_blank">Postman</a> or any other API development or testing tools in their projects. Throughout this post, we will explore the key features of <a href="https://www.usebruno.com/" target="_blank">Bruno</a>, an alternative tool to Postman, and highlight the significant benefits it brought to our projects.</p></div>
                            
                                

                                                                                                                                                                
                                <h2 class="t-3xl ">Postman, a reliable tool for API testing</h2>
                            
                                

                                                                                                                                    
                                
<div class="t-small ">
<p>When it comes to API testing, Postman has been the tool of choice for countless organisations. Prized for its easy-to-use interface, Postman simplifies the API testing process and allows you to incorporate response validation tests written in JavaScript, enabling users to ensure the accuracy and reliability of API responses.</p><p>Another big advantage of Postman is its flexibility when using different environments. Whether you’re testing locally, on staging servers, or in production environments, Postman provides a streamlined user experience, allowing users to adapt their testing strategies with ease.</p></div>
                            
                                

                                                                                                                                                                
                                <h2 class="t-3xl ">Why look for an alternative to Postman?</h2>
                            
                                

                                                                                                                                    
                                
<div class="t-small ">
<p>In one of our projects we used GraphQL queries tailored to each environment, and we needed to ensure thorough testing across different setups. We implemented an HTML report feature, offering a visually engaging summary of our findings.</p><p>This report not only facilitated clear communication of our test results to our stakeholders, but also aided in identifying any discrepancies or areas for improvement. We were also able to streamline our testing process by integrating our collection into the CI/CD pipeline using <a href="https://learning.postman.com/docs/collections/using-newman-cli/command-line-integration-with-newman/" target="_blank">Newman</a>. This integration allowed for nightly executions of the collection without encountering any operational issues, enabling continuous monitoring of the services' performance and stability.</p><p>In late 2023, Postman <a href="https://blog.postman.com/announcing-new-lightweight-postman-api-client" target="_blank">announced</a> it was making some significant modifications to its pricing and functionality structure. We were using the free desktop version, now known as the lightweight API client, which became impractical for our needs.</p><p>While the prospect of upgrading to the enterprise version to access the necessary functionalities seemed plausible, it unveiled a dealbreaker for our client: data security.</p><p>Transitioning to the enterprise version mandated data synchronisation with the cloud, a feature that clashed with our client's security protocols. Further compounding this challenge was the absence of an offline version in Postman's offerings, except for the lightweight API client. Consequently, we had to identify a suitable alternative tool that could accommodate our testing needs while adhering to our client's security standards.</p></div>
                            
                                

                                                                                                                                                                
                                <h2 class="t-3xl ">Introducing Bruno</h2>
                            
                                

                                                                                                                                    
                                
<div class="t-small ">
<p>Bruno is an alternative to Postman which offers some key features that immediately appeared useful for our migration activities:</p><ol><li><p>User interface</p></li><li><p>Importing collections</p></li><li><p>Assertions</p></li><li><p>Visual Studio Code extension</p></li><li><p>Secrets</p></li></ol></div>
                            
                                

                                                                                                
                                
<div class="t-large ">
<h3>User interface</h3></div>
                            
                                

                                                                                                
                                
<div class="t-small ">
<p>Bruno’s UI is very similar to Postman’s. If you’re used to using Postman, you’ll know straight away where everything is. Postman’s UI worked well and had a natural flow, and Bruno appears to have deliberately stuck to that flow.</p><p>In Bruno's 'Settings' section, you'll discover a feature that allows you to include scripts for acquiring access tokens or executing other prerequisite actions. Additionally, you can select the authentication mode for the entire collection. Interestingly, this functionality now appears to be absent from Postman's free version, marking a notable advantage for Bruno users.</p><p>When it comes to the query section, Bruno closely resembles Postman, with one notable addition: the 'Assertion' section. This extra component provides users with a dedicated space to define assertions, allowing for more comprehensive and efficient testing. We'll delve deeper into this feature later on.</p></div>
                            
                                

                                                                                                
                                
<div class="t-large ">
<h3>Importing collections</h3></div>
                            
                                

                                                                                                
                                
<div class="t-small ">
<p>The ability to import existing collections from Postman proved invaluable, particularly given our prior investment in Postman. Using this functionality, we migrated our extensive collection from Postman to Bruno.</p><p>However, it’s worth noting that some adjustments were necessary during the migration process. Postman’s folder-level pre-request script, which we used to acquire access tokens for our APIs, did not get imported, so we had to rewrite it. Additionally, Bruno employs its own set of keywords and conventions, so we had to update our tests to keep them working.</p><p>Nevertheless, despite these minor modifications, the migration process was smooth and efficient, enabling us to swiftly transition our testing infrastructure to Bruno's platform without any impact on the validity or functionality of our test suites.</p></div>
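<p>For illustration, here is the kind of pre-request script we had to rewrite by hand. The block below is a sketch of a <code>script:pre-request</code> block in a <code>.bru</code> file; the token endpoint, variable names and exact syntax are examples rather than our actual setup, so verify them against your Bruno version's documentation:</p>

```text
script:pre-request {
  // Hypothetical token endpoint and credential names; adjust to your API.
  const res = await fetch(bru.getEnvVar("authUrl"), {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      clientId: bru.getEnvVar("clientId"),
      clientSecret: bru.getEnvVar("clientSecret")
    })
  })
  const data = await res.json()
  // Store the token so subsequent requests in the collection can use it
  bru.setEnvVar("accessToken", data.access_token)
}
```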
                            
                                

                                                                                                                                                                                                <figure>
                                        <img src="https://nearform.estd.dev/media/pages/digital-community/exploring-if-bruno-is-a-viable-alternative-api-testing-tool-to-postman/94299021aa-1725542994/blog-exploring-if-bruno-is-a-viable-alternative-api-testing-tool-to-postman-importing-collections-500x300-crop-q80.png" style="width: 100%; height: auto; margin-top: 20px; margin-bottom: 10px;" alt="" />
                                    </figure>
                                                                
                                

                                                                                                
                                
<div class="t-large ">
<h3>Assertions</h3></div>
                            
                                

                                                                                                
                                
<div class="t-small ">
<p>As a test engineer, I loved this feature that Postman does not have. It allows users to effortlessly build tests one after another, simplifying the testing process significantly.</p></div>
                            
                                

                                                                                                                                                                                                <figure>
                                        <img src="https://nearform.estd.dev/media/pages/digital-community/exploring-if-bruno-is-a-viable-alternative-api-testing-tool-to-postman/4ff88e6a60-1725542994/blog-exploring-if-bruno-is-a-viable-alternative-api-testing-tool-to-postman-importing-collections-films-500x300-crop-q80.png" style="width: 100%; height: auto; margin-top: 20px; margin-bottom: 10px;" alt="" />
                                    </figure>
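<p>For readers unfamiliar with the feature, assertions live directly alongside the request in its <code>.bru</code> file. The snippet below is a sketch based on our understanding of Bruno's assertion syntax; the response fields and operators shown are examples:</p>

```text
assert {
  res.status: eq 200
  res.body.data.length: gt 0
}
```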
                                                                
                                

                                                                                                
                                
<div class="t-small ">
<p>Despite my appreciation for this feature, its utilisation was not necessary for our project at the time. Given that we already had an extensive collection established in Postman, our approach involved importing this existing collection into Bruno and subsequently updating it to align with Bruno’s conventions.</p><p>The screenshot below shows Postman’s tests that we imported into Bruno. As you can see, you cannot use them as they are. We updated all our tests using <a href="https://en.wikipedia.org/wiki/Grep" target="_blank">grep</a>, which took only a small amount of time to make them executable again.</p></div>
                            
                                

                                                                                                                                                                                                <figure>
                                        <img src="https://nearform.estd.dev/media/pages/digital-community/exploring-if-bruno-is-a-viable-alternative-api-testing-tool-to-postman/8fbb24c045-1725542994/blog-exploring-if-bruno-is-a-viable-alternative-api-testing-tool-to-postman-importing-collections-test-comparison-500x300-crop-q80.png" style="width: 100%; height: auto; margin-top: 20px; margin-bottom: 10px;" alt="" />
                                    </figure>
                                                                
                                

                                                                                                
                                
<div class="t-large ">
<h3>Visual Studio Code extension</h3></div>
                            
                                

                                                                                                
                                
<div class="t-small ">
<p>Postman offered an 'export' function, enabling users to work with exported Postman collections in Visual Studio Code. However, the current version has removed this functionality, making it impossible to share collections through Visual Studio Code.</p><p>Bruno comes with a Visual Studio Code extension. What truly set this feature apart was its synchronisation with the UI. Any modifications made within the UI were automatically reflected in the corresponding files. All the essential components (header data, request body, variables and tests) were neatly organised in one file.</p></div>
                            
                                

                                                                                                                                                                                                <figure>
                                        <img src="https://nearform.estd.dev/media/pages/digital-community/exploring-if-bruno-is-a-viable-alternative-api-testing-tool-to-postman/4b031bd0d7-1725542994/blog-exploring-if-bruno-is-a-viable-alternative-api-testing-tool-to-postman-importing-collections-visual-sudio-code-500x300-crop-q80.png" style="width: 100%; height: auto; margin-top: 20px; margin-bottom: 10px;" alt="" />
                                    </figure>
                                                                
                                

                                                                                                
                                
<div class="t-large ">
<h3>Secrets</h3></div>
                            
                                

                                                                                                
                                
<div class="t-small ">
<p>Within Bruno’s configuration settings, users have the capability to hide the values of variables designated as sensitive.</p><p>This functionality ensures that these values remain safeguarded within the environment settings. Consequently, they are accessible for local usage while being shielded from inadvertent exposure when pushing changes to remote repositories. The ability to hide secret values in this manner provides an added layer of security and compliance, aligning with industry best practices for safeguarding sensitive data.</p></div>
                            
                                

                                                                                                                                                                                                <figure>
                                        <img src="https://nearform.estd.dev/media/pages/digital-community/exploring-if-bruno-is-a-viable-alternative-api-testing-tool-to-postman/9c9caf8fc8-1725542994/blog-exploring-if-bruno-is-a-viable-alternative-api-testing-tool-to-postman-env-file-500x300-crop-q80.png" style="width: 100%; height: auto; margin-top: 20px; margin-bottom: 10px;" alt="" />
                                    </figure>
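<p>As a sketch of what this looks like on disk (variable names are examples, and the exact syntax may differ between Bruno versions): plain variables sit in a <code>vars</code> block of the environment file, while variables marked as secret are listed by name in a <code>vars:secret</code> block so their values are not committed alongside the collection:</p>

```text
vars {
  baseUrl: https://api.example.com
}
vars:secret [
  accessToken
]
```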
                                                                
                                

                                                                                                                                                                
                                <h2 class="t-3xl ">Did Bruno replace Postman successfully?</h2>
                            
                                

                                                                                                                                    
                                
<div class="t-small ">
<p>The short answer is yes; it did for our project. Bruno was able to do everything Postman did.</p><p>The longer answer is that during the initial phases of our migration, we encountered a hurdle in extracting environment variables within our CI/CD pipeline. This posed a significant challenge, as integration with our automated pipeline was crucial for maintaining testing efficiency and workflow continuity.</p><p>Fortunately, Bruno's status as a relatively new tool with an engaged community proved to be advantageous. The platform undergoes frequent updates, driven by user feedback and community collaboration. As luck would have it, one of the recent updates addressed the issue we encountered, providing the solution needed to proceed with migrating our Postman collections to Bruno.</p><p>In addition, Bruno's user interface proved to be intuitive and user-friendly, particularly for those accustomed to Postman. The integration with a VS Code extension enabled collaboration and sharing among team members, encouraging more engineers to actively contribute to the collection.</p><p>Previously, the responsibility for updating the collection primarily fell on test engineers in our project. However, post-migration, developers became more involved in updating the collections alongside their application code changes, as both files resided in the same repository. This shift fostered greater collaboration between developers and test engineers, resulting in a smoother and more efficient testing process.</p><p>Following the migration, our API tests continued to run without issues. The increased collaboration between developers and test engineers contributed to the overall success of the migration. Encouraged by the smooth migration and the continuous evolution of Bruno, we recommend you give Bruno a try.</p></div>
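<p>For teams replacing nightly Newman runs, Bruno's CLI can be scripted into a pipeline. The sketch below is a hypothetical GitHub Actions job, not our actual configuration: the <code>@usebruno/cli</code> package name and <code>bru run</code> flags reflect our understanding of the CLI, and the paths, environment name and secret name are examples to verify against your own setup:</p>

```yaml
name: nightly-api-tests
on:
  schedule:
    - cron: '0 2 * * *'   # nightly, like our previous Newman runs

jobs:
  bruno:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm install -g @usebruno/cli
      # Run the collection against the staging environment, injecting the
      # secret from the CI secret store rather than from a committed file
      - run: bru run --env staging --env-var accessToken=${{ secrets.ACCESS_TOKEN }}
        working-directory: ./api-collection
```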
                            
                                

                                                                                                                                                                                                ]]>
        </description>
    </item>
      <item>
        <title><![CDATA[Nearform's Open Source workshops]]></title>
        <link>https://nearform.estd.dev/digital-community/nearform-s-open-source-workshops</link>
        <guid>https://nearform.estd.dev/@/page/e9c3322e-f340-405f-8f8d-98de274d47b7</guid>

      
                    <category>Digital Community</category>
              
    

        <pubDate>Tue, 25 Jun 2024 00:00:00 +0000</pubDate>
            <author>
                        </author>
                            <media:content url="https://nearform.estd.dev/media/pages/digital-community/nearform-s-open-source-workshops/7c2b5a50d2-1725542994/blog-nearform-open-source-workshops-image-500x300-crop-q80.png" type="image/webp" medium="image" duration="10"> </media:content>
            
            <description>
                
            <![CDATA[
            <h2>

Nearform has curated an impressive array of workshops covering a diverse range of topics
</h2>
                                                                                                                                        
                                
<div class="t-large ">
<p>Are you a developer eager to expand your skill set or dive into cutting-edge technologies? Look no further than Nearform's collection of open-source workshops! </p></div>
                            
                                

                                                                                                
                                
<div class="t-small ">
<p>With a commitment to promoting learning within the developer community, Nearform has curated an impressive array of workshops covering a diverse range of topics, from backend frameworks like Fastify to security essentials like OWASP Top Ten and everything in between.</p><p>Each workshop is designed to combine theoretical knowledge with hands-on practice, ensuring participants gain a comprehensive understanding of the topic at hand. Participants are actively engaged in writing code and running tests to validate their understanding and mastery of the concepts covered.</p><p>With solutions readily available, participants can confidently solve each exercise knowing that assistance is just a step away. Let's take a closer look at some of the workshops.</p></div>
                            
                                

                                                                                                                                                                
                                <h2 class="t-3xl ">Fastify Workshop</h2>
                            
                                

                                                                                                                                    
                                
<div class="t-small ">
<p><a href="https://fastify.dev/" target="_blank">Fastify</a>, a web framework for Node.js, boasts lightning-fast performance and a minimalist design. <a href="https://github.com/nearform/the-fastify-workshop" target="_blank">Nearform's Fastify Workshop</a> provides a comprehensive introduction to building web applications with Fastify. Whether you're a beginner or an experienced developer, this workshop offers valuable insights and hands-on exercises to master Fastify's features and best practices. </p><p>Here is an example of an exercise you can find in the workshop:</p></div>
                            
                                

                                                                                                                                                                                                <figure>
                                        <img src="https://nearform.estd.dev/media/pages/digital-community/nearform-s-open-source-workshops/4ffea2662a-1725542994/blog-nearform-open-source-workshops-1-500x300-crop-q80.png" style="width: 100%; height: auto; margin-top: 20px; margin-bottom: 10px;" alt="" />
                                    </figure>
                                                                
                                

                                                                                                
                                
<div class="t-small ">
<p>If you need some help, you can find the solution in the source code:</p></div>
                            
                                

                                                                                                
                                <figure class="code_container">
        

    <figcaption>
        <span>js</span> <!-- Display the languageTitle or languageCode -->
        <button class="copy-button" data-clipboard-target="#code_block_69d0b91b24736">Copy to clipboard</button>
    </figcaption>
    <pre><code id="code_block_69d0b91b24736" class="language-js">import S from &#039;fluent-json-schema&#039;

const schema = {
  body: S.object()
    .prop(&#039;username&#039;, S.string().required())
    .prop(&#039;password&#039;, S.string().required()),
}

export default async function login(fastify) {
  fastify.post(
    &#039;/login&#039;,
    { schema },
    async req =&gt; {
      const { username, password } = req.body
      return { username, password }
    },
  )
}</code></pre>
</figure>
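<p>To make the schema's effect concrete, here is a simplified, illustrative sketch (plain JavaScript, not Fastify's actual Ajv-based validator) of the contract the <code>fluent-json-schema</code> definition above enforces: requests missing either required field are rejected with a 400 before the handler ever runs.</p>

```javascript
// Simplified sketch of what the login route's body schema enforces:
// both 'username' and 'password' must be present strings, otherwise
// Fastify replies with a 400 before the handler runs. (Fastify
// actually compiles the schema with Ajv; this only illustrates
// the contract, and validateLoginBody is a hypothetical name.)
function validateLoginBody(body) {
  const errors = []
  for (const field of ['username', 'password']) {
    if (typeof (body || {})[field] !== 'string') {
      errors.push(`body must have required property '${field}'`)
    }
  }
  return { valid: errors.length === 0, errors }
}

console.log(validateLoginBody({ username: 'jane', password: 's3cret' }).valid)
// -> true
console.log(validateLoginBody({ username: 'jane' }).errors)
// -> [ "body must have required property 'password'" ]
```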
                            
                                

                                                                                                
                                
<div class="t-small ">
<p>You can then test it with Postman or curl:</p></div>
                            
                                

                                                                                                                                                                                                <figure>
                                        <img src="https://nearform.estd.dev/media/pages/digital-community/nearform-s-open-source-workshops/c53535fb12-1725542994/blog-nearform-open-source-workshops-6-500x300-crop-q80.png" style="width: 100%; height: auto; margin-top: 20px; margin-bottom: 10px;" alt="" />
                                    </figure>
                                                                
                                

                                                                                                                                                                
                                <h2 class="t-3xl ">OWASP Top Ten Workshop</h2>
                            
                                

                                                                                                                                    
                                
<div class="t-small ">
<p>Security is paramount in today's digital landscape, and <a href="https://github.com/nearform/owasp-top-ten-workshop" target="_blank">Nearform's OWASP Top Ten Workshop</a> equips developers with the essential knowledge to identify and mitigate common security vulnerabilities. By exploring the OWASP Top Ten risks, participants learn how to secure their applications and protect against threats effectively.</p><p>For example, in this exercise you have to fix a snippet of code with a <a href="https://owasp.org/www-community/attacks/SQL_Injection" target="_blank">SQL injection</a> vulnerability:</p></div>
                            
                                

                                                                                                
                                <figure class="code_container">
        

    <figcaption>
        <span>js</span> <!-- Display the languageTitle or languageCode -->
        <button class="copy-button" data-clipboard-target="#code_block_69d0b91b24c70">Copy to clipboard</button>
    </figcaption>
    <pre><code id="code_block_69d0b91b24c70" class="language-js">import errors from &#039;http-errors&#039;

async function customer(fastify) {
  fastify.get(
    &#039;/customer&#039;,
    {
      onRequest: [fastify.authenticate]
    },
    async req =&gt; {
      const { name } = req.query
      const { rows: customers } = await fastify.pg.query(
        `SELECT * FROM customers WHERE name=&#039;${name}&#039;`
      )
      if (!customers.length) throw errors.NotFound()
      return customers
    }
  )
}</code></pre>
</figure>
                            
                                

                                                                                                
                                
<div class="t-small ">
<p>Let’s fix it by parameterising the query, so user input is passed as a bound value rather than spliced into the SQL string:</p></div>
                            
                                

                                                                                                
                                <figure class="code_container">
        

    <figcaption>
        <span>js</span> <!-- Display the languageTitle or languageCode -->
        <button class="copy-button" data-clipboard-target="#code_block_69d0b91b24c8f">Copy to clipboard</button>
    </figcaption>
    <pre><code id="code_block_69d0b91b24c8f" class="language-js">import SQL from &#039;@nearform/sql&#039;
import errors from &#039;http-errors&#039;

export default async function customer(fastify) {
  fastify.get(
    &#039;/customer&#039;,
    {
      onRequest: [fastify.authenticate]
    },
    async req =&gt; {
      const { name } = req.query
      const { rows: customers } = await fastify.pg.query(
        SQL`SELECT * FROM customers WHERE name=${name}` // SQL function from @nearform/sql
      )
      if (!customers.length) throw errors.NotFound()
      return customers
    }
  )
}</code></pre>
</figure>
                            
                                

                                                                                                
                                
<div class="t-small ">
<p>To make sure you have fixed the vulnerability correctly, you can run <code>npm run verify</code>, which will test that your solution addresses the security issue in the code.</p></div>
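If you are curious how a tagged template can neutralise injection, here is a minimal sketch of the idea behind @nearform/sql (illustrative only, not the library's actual implementation): the template splits the query into static text and bound values, so the database driver receives the user input as a parameter rather than as SQL.

```javascript
// Minimal sketch of a parameterising tagged template, in the spirit of
// @nearform/sql (illustrative, not the library's real implementation).
function SQL(strings, ...values) {
  // Join the static parts, replacing each interpolation with a $n placeholder
  const text = strings.reduce((acc, part, i) => acc + (i ? `$${i}` : '') + part, '')
  return { text, values }
}

const name = "Robert'; DROP TABLE customers;--"
const query = SQL`SELECT * FROM customers WHERE name=${name}`

console.log(query.text)   // SELECT * FROM customers WHERE name=$1
console.log(query.values) // the raw input, passed separately to the driver
```

Because the malicious string only ever appears in `values`, never in `text`, the driver treats it as data and the injection attempt is inert.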
                            
                                

                                                                                                                                                                
                                <h2 class="t-3xl ">GraphQL Workshop</h2>
                            
                                

                                                                                                                                    
                                
<div class="t-small ">
<p>GraphQL has revolutionised the way developers design and query APIs, offering flexibility and efficiency unmatched by traditional RESTful approaches. <a href="https://github.com/nearform/the-graphql-workshop" target="_blank">Nearform's GraphQL Workshop</a> demystifies this powerful technology, guiding developers through the fundamentals of schema design, querying, and optimisation techniques.</p><p>Loaders are an amazing feature of GraphQL. In this exercise, you will learn how to use them correctly:</p></div>
                            
                                

                                                                                                                                                                                                <figure>
                                        <img src="https://nearform.estd.dev/media/pages/digital-community/nearform-s-open-source-workshops/c3ad8d9bc7-1725542994/blog-nearform-open-source-workshops-2-500x300-crop-q80.png" style="width: 100%; height: auto; margin-top: 20px; margin-bottom: 10px;" alt="" />
                                    </figure>
                                                                
                                

                                                                                                
                                
<div class="t-small ">
<p>And then you can check the solution and run the test:</p></div>
                            
                                

                                                                                                
                                <figure class="code_container">
        

    <figcaption>
        <span>js</span> <!-- Display the languageTitle or languageCode -->
        <button class="copy-button" data-clipboard-target="#code_block_69d0b91b25166">Copy to clipboard</button>
    </figcaption>
    <pre><code id="code_block_69d0b91b25166" class="language-js">const pets = [
  {
    name: &#039;Max&#039;
  },
  {
    name: &#039;Charlie&#039;
  }
]

const owners = {
  Max: {
    name: &#039;Jennifer&#039;
  },
  Charlie: {
    name: &#039;Simon&#039;
  }
}

const schema = `
  type Person {
    name: String!
  }

  type Pet {
    name: String!
    owner: Person
  }

  type Query {
    pets: [Pet]
  }
`

const resolvers = {
  Query: {
    pets() {
      return pets
    }
  }
}

const loaders = {
  Pet: {
    async owner(queries) {
      return queries.map(({ obj: pet }) =&gt; owners[pet.name])
    }
  }
}

export { schema, resolvers, loaders }</code></pre>
</figure>
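To see why the loader above helps, note that the server collects all pending `owner` lookups and hands them to the loader in a single batched call, instead of invoking a resolver once per pet (the classic N+1 problem). Here is a stand-alone sketch of that batching contract; the `queries` shape mirrors what mercurius passes to a loader, and the data is taken from the solution above:

```javascript
// Stand-alone sketch of the loader batching contract: one call
// receives every pending lookup and returns results in the same order.
const owners = {
  Max: { name: 'Jennifer' },
  Charlie: { name: 'Simon' }
}

async function ownerLoader(queries) {
  // queries: [{ obj: pet }, ...], one entry per parent object being resolved
  return queries.map(({ obj: pet }) => owners[pet.name])
}

// Two pets resolve in a single loader invocation, not two separate calls
ownerLoader([{ obj: { name: 'Max' } }, { obj: { name: 'Charlie' } }]).then(batch => {
  console.log(batch) // [ { name: 'Jennifer' }, { name: 'Simon' } ]
})
```

With a per-field resolver, fetching owners for N pets would mean N database queries; with a loader, it is one batched query per request.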
                            
                                

                                                                                                                                                                
                                <h2 class="t-3xl ">Node Test Runner Workshop</h2>
                            
                                

                                                                                                                                    
                                
<div class="t-small ">
<p>Testing is an integral part of the software development process, ensuring the reliability and robustness of your applications.</p><p>Node.js released an <a href="https://nodejs.org/en/blog/announcements/v18-release-announce#test-runner-module-experimental" target="_blank">experimental test runner in version 18</a> and made that <a href="https://nodejs.org/en/blog/announcements/v20-release-announce#stable-test-runner" target="_blank">test runner stable in version 20</a>, allowing developers to test their applications without the need for external dependencies.</p><p><a href="https://github.com/nearform/node-test-runner-workshop" target="_blank">Nearform's Node Test Runner Workshop</a> introduces developers to various testing methodologies and tools, empowering them to write comprehensive test suites and automate testing workflows effectively.</p><p>In this slide we explain the difference between a Test Runner and a Testing Framework:</p></div>
                            
                                

                                                                                                                                                                                                <figure>
                                        <img src="https://nearform.estd.dev/media/pages/digital-community/nearform-s-open-source-workshops/c0f5e6d42e-1725542994/blog-nearform-open-source-workshops-3-500x300-crop-q80.png" style="width: 100%; height: auto; margin-top: 20px; margin-bottom: 10px;" alt="" />
                                    </figure>
                                                                
                                

                                                                                                                                                                
                                <h2 class="t-3xl ">Micro Frontends Workshop</h2>
                            
                                

                                                                                                                                    
                                
<div class="t-small ">
<p>Micro frontends architecture enables teams to independently develop, deploy and scale frontend components, fostering agility and collaboration in large-scale projects. <a href="https://github.com/nearform/the-micro-frontends-workshop" target="_blank">Nearform's Micro Frontends Workshop</a> guides developers through the principles and implementation strategies of micro frontend architecture, equipping them with the knowledge to build modular and scalable frontend systems.</p></div>
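One common way to implement this architecture is webpack Module Federation, where each micro frontend ships a small runtime manifest that host applications load on demand. Below is a configuration sketch for a remote; all names here (`checkout`, `./Cart`) are illustrative, not from the workshop:

```javascript
// webpack.config.js sketch for a micro frontend exposing a Cart component.
// All names here ('checkout', './Cart') are illustrative.
const { ModuleFederationPlugin } = require('webpack').container

module.exports = {
  plugins: [
    new ModuleFederationPlugin({
      name: 'checkout',           // global name hosts use to reference this remote
      filename: 'remoteEntry.js', // manifest the host loads at runtime
      exposes: {
        './Cart': './src/Cart'    // module made available to host applications
      },
      // share a single copy of React between host and remotes
      shared: { react: { singleton: true }, 'react-dom': { singleton: true } }
    })
  ]
}
```

A host configured with this remote can then `import('checkout/Cart')` at runtime, so the cart team deploys independently of the host application.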
                            
                                

                                                                                                                                                                                                <figure>
                                        <img src="https://nearform.estd.dev/media/pages/digital-community/nearform-s-open-source-workshops/b8733f0e86-1725542994/blog-nearform-open-source-workshops-4-500x300-crop-q80.png" style="width: 100%; height: auto; margin-top: 20px; margin-bottom: 10px;" alt="" />
                                    </figure>
                                                                
                                

                                                                                                                                                                
                                <h2 class="t-3xl ">React Native Workshop</h2>
                            
                                

                                                                                                                                    
                                
<div class="t-small ">
<p>With the rise of mobile app development, React Native has emerged as a popular framework for building cross-platform applications with JavaScript and React. <a href="https://github.com/nearform/react-native-workshop" target="_blank">Nearform's React Native Workshop</a> empowers developers to harness the full potential of React Native, from setting up development environments to building native-quality mobile experiences.</p></div>
                            
                                

                                                                                                                                                                                                <figure>
                                        <img src="https://nearform.estd.dev/media/pages/digital-community/nearform-s-open-source-workshops/655b73914b-1725542994/blog-nearform-open-source-workshops-5-500x300-crop-q80.png" style="width: 100%; height: auto; margin-top: 20px; margin-bottom: 10px;" alt="" />
                                    </figure>
                                                                
                                

                                                                                                                                                                
                                <h2 class="t-3xl ">Immerse yourself in Nearform’s workshops</h2>
                            
                                

                                                                                                                                    
                                
<div class="t-small ">
<p>Whether you're diving into backend development, exploring emerging technologies, or fortifying your application's security, Nearform's open-source workshops provide invaluable resources and guidance. Immerse yourself in these workshops, join a vibrant developer community and commence a journey of perpetual learning and advancement.</p><p><a href="https://github.com/search?q=topic:workshop+org:nearform+fork:true&amp;type=repositories" target="_blank">Here</a> you can find the full list of all our workshops.</p><p>So, which workshop will you venture into first? Let's embark on this learning journey together!</p></div>
                            
                                

                                                                                                                                                                                                ]]>
        </description>
    </item>
      <item>
        <title><![CDATA[Strategic insights on the new era of digital business transformation: An interview]]></title>
        <link>https://nearform.estd.dev/insights/strategic-insights-on-the-new-era-of-digital-business-transformation-an-interview</link>
        <guid>https://nearform.estd.dev/@/page/a5f7fce2-884c-45b9-a0e8-d318ce8561f7</guid>

      
                    <category>Insights</category>
              
    

        <pubDate>Tue, 25 Jun 2024 00:00:00 +0000</pubDate>
            <author>
            Damo Girling            </author>
                            <media:content url="https://nearform.estd.dev/media/pages/insights/strategic-insights-on-the-new-era-of-digital-business-transformation-an-interview/38eb84f201-1722602875/blog-strategic-insights-on-the-new-era-of-digital-business-transformation-an-interview-pic-500x300-crop-q80.jpg" type="image/webp" medium="image" duration="10"> </media:content>
            
            <description>
                
            <![CDATA[
            <h2>

“At its root, AI is a user experience strategy. It requires getting close to and understanding use cases and desired impacts.” — Peri Kadaster, Chief Marketing Officer at Nearform
</h2>
                                                                                                                                        
                                
<div class="t-large ">
<p>Nearform’s Chief Marketing Officer, Peri Kadaster, shares the expert insights she’s gained from her 15-year career as a tech leader. She discusses how business leaders can successfully leverage AI, Nearform’s unique approach to helping organisations successfully navigate their transformation journeys, her experiences as a woman in tech and more. </p></div>
                            
                                

                                                                                                                                                                
                                <h2 class="t-3xl ">Who are you and what’s your background?</h2>
                            
                                

                                                                                                                                    
                                
<div class="t-small ">
<p>I am a strategy consultant-turned-marketer. I started my career at companies like The Parthenon Group (now Parthenon-EY) and McKinsey &amp; Company — where I focused my time on the tech &amp; startup industry, doing marketing strategy. </p><p>I shifted gears to working in tech about 15 years ago, first doing mobile user acquisition as VP of Marketing &amp; Analytics at CoffeeTable, an e-commerce startup in San Francisco, and later moving into B2B marketing, with leadership roles at a Turkish fintech and then at McKinsey Digital Labs. </p><p>I was thrilled to join Nearform in the summer of 2023 as Chief Marketing Officer.</p></div>
                            
                                

                                                                                                                                                                
                                <h2 class="t-3xl ">You were a speaker at the 2024 Dublin Tech Summit, what was that experience like?</h2>
                            
                                

                                                                                                                                    
                                
<div class="t-small ">
<p>Speaking at the <a href="https://dublintechsummit.tech/" target="_blank">Dublin Tech Summit</a> was an amazing experience. It was great to bring together diverse perspectives on timely topics. I was lucky enough to speak on a panel with leaders from NASA, Microsoft and many other leading organisations from Europe and around the world.</p><p>Something we agreed on is the fact that digital transformation, as we know it or have known it in recent years, is fundamentally about to change — and a big driver of that is the advent of AI.</p><p>AI is going to change the way we do everything. It's going to change how companies like Nearform develop software, products and platforms, and how we work with data. It's also going to change how we serve clients, who include enterprises from sectors such as financial services, telco, healthcare and more. </p><p>From a marketing perspective, the way customers receive services and get value from companies, and the way brands communicate, all of this is about to change with how AI is incorporated. A lot of that brings good news. You can see the potential for efficiency. You can see the potential for customers to get served with what they need in a faster way and, potentially, a better, more accurate way. You'll see efficiency gains in terms of production and the supply side. </p><p>But with it are also inherent risks. There are concerns about data, privacy and security. There are certainly concerns around regulation, which currently is a patchwork of different approaches across different geographies. For example, the EU is about to pass laws that are very different from what the US is passing federally, which is complemented by what specific US states are passing separately. 
</p><p>So, while it's an exciting area, and one that will fundamentally change how enterprises engage in business transformation, it's also one to really be vigilant about and have an approach that integrates risk mitigation along with execution in parallel.</p></div>
                            
                                

                                                                                                                                                                
                                <h2 class="t-3xl ">What was the key insight you gained from each of your fellow speakers at the Dublin Tech Summit?</h2>
                            
                                

                                                                                                                                    
                                
<div class="t-small ">
<p>Each of us brought different perspectives to the Dublin Tech Summit.</p><p>One of the speakers was a digital leader from NASA who has been there for 25 years. It was interesting to hear about the similarities, as well as differences, in the issues they face. </p><p>A lot of the emphasis was on the importance of having a business wrapper around technological initiatives. This is key because there is a risk of developing technology for technology's sake — creating new features that weren't available before, but now can be delivered, simply because they're feasible. But in the absence of a business lens, this raises the risk of creating features that won't be fundamentally adopted by the market, by users, by consumers, by employees etc. </p><p>I thought he brought a really interesting perspective in terms of that requirement, of the intersection of business and technology, one that aligns with Nearform’s approach of collaborating with both digital and business leaders across functions.</p><p>Another one of the speakers made a point about the importance of people in the transformation process. We talk about AI as a technology and as a digital initiative. But, by and large, it's actually a <em>people</em> project, because so much of what AI enables and/or requires is change in processes. It will change the skills required to complete certain tasks and fulfil certain roles, but it also will require a redesign of processes and ways of working.</p><p>All of that requires behaviour change. As anyone who has worked on transformation initiatives with companies knows, a big goal is to change people's existing habits — break old habits and create new habits. </p><p>I think the intersection of those two perspectives was really interesting.</p></div>
                            
                                

                                                                                                                                                                
                                <h2 class="t-3xl ">What defined the previous era of transformation strategies?</h2>
                            
                                

                                                                                                                                    
                                
<div class="t-small ">
<p>The previous age was very much focused on thinking of digital transformation as a separate but parallel initiative to what the business objectives and business strategy are. </p><p>Organisationally, CIOs, CTOs and CDOs were often very siloed from the business units themselves. So tech decisions would be seen as large investment decisions, and large capital outlays, the impact of which would oftentimes be measured at the business unit level. Sometimes at the corporate level, in the case of corporate strategy teams, and so forth. But they were seen as disparate streams of work.</p><p>Now business and technology are increasingly intertwined, so the key is to leverage today's technology in service of your business objectives. That means organisations are increasingly on the front foot of technology tools. Put another way, transformation is now multiple layers.</p><p>Digital transformation cannot be digital alone. It's almost a misnomer. The real challenge of any transformation effort is that it requires people to adapt their behaviour. If you underestimate the “people” aspect, both in terms of buy-in for the adoption of new tools as well as the capability building required to really leverage and get the most out of the investment and new tools, you won't be successful — no matter how great that shiny new product or platform may be.</p></div>
                            
                                

                                                                                                                                                                
                                <h2 class="t-3xl ">Where do data and AI fit into this new era?</h2>
                            
                                

                                                                                                                                    
                                
<div class="t-small ">
<p>There are a number of key technologies. Data is the obvious one, and AI is the one that’s top of mind for everyone. But what AI means is different for each organisation, and it's a moving target.</p><p>At Nearform, we tell our clients we meet you where you are. This is because there are many factors that influence where an organisation sits on the data and on the AI maturity curves. The reason we say this is that AI is not one point in time; it's not even a linear goal. It's generally the culmination of a required multi-step process. </p><p>This starts from data engineering, just getting the plumbing working and having the data “pipes” around different parts of the organisation talking to each other. It then goes to enabling the data science required to unlock the analytics that can drive business intelligence, and ultimately insights and actionable decisions you can take.</p><p>That moves up to AI, where the human is no longer necessarily the decision maker. But the technology can take on that role. And all along the way are other critical areas, like data governance, reliability, security and much more.</p><p>Again, it's this notion of technology, capability and strategy all coming together as one.</p><p>Another consideration is: “Who is the end user of AI?” There are so many applications, and one that always comes to mind is the notion of chatbots and how they can serve customers. But you also have to think about internal applications of AI — for employees. There's an urgent need to reevaluate how work is done in the context of AI.</p><p>At its root, AI is a user experience strategy. It requires getting close to and understanding use cases and desired impacts — as opposed to technology for technology's sake or creating features for features' sake. It's taking a holistic view of how that particular user is incentivised and is trained to operate.</p></div>
                            
                                

                                                                                                                                                                
                                <h2 class="t-3xl ">How can business leaders ensure they're investing in the right areas?</h2>
                            
                                

                                                                                                                                    
                                
<div class="t-small ">
<p>There's increasingly this competitive dynamic between enterprises and newly emerging challenger brands. We see a lot of this in the financial services industry, with new branchless banks for example. We see that in numerous other industries as well. </p><p>Investment can and should be made as close to the user as possible. As I mentioned earlier, individual business units within an organisation are increasingly involved in investment decisions around technology. This enables existing enterprises to reinvent themselves at a more rapid pace than was previously expected. Plus, they have the added advantage of having legacy knowledge and expertise, as well as a track record of success. </p><p>While emerging technology may be seen as democratising access to markets for new brands, it can also provide new competitive advantages for existing brands too.</p></div>
                            
                                

                                                                                                                                                                
                                <h2 class="t-3xl ">Should people be afraid of this new era of transformation?</h2>
                            
                                

                                                                                                                                    
                                
<div class="t-small ">
<p>Fear of change is a natural human response, but I would say no in the short term. In fact, a lot of the initial disruption we're seeing is in the most frustrating or annoying — and least sexy — areas. </p><p>For instance, I angel invest in American and European startups. I remember one of my first investments some years ago was for a company that helps digitise the note-taking process in elderly caretaker settings. The environment and the processes were not exactly cutting-edge. However, the user insights and the data that emerged helped develop a tool that led to a step function change. This change wasn’t just in the note-taking process, but also in employee engagement, as well as in the level of patient care.</p><p>I think the same is true today when thinking about where to embed new technology in the day-to-day. You look first at: “What is it that people complain about?” It could be HR processes. (In my case, it's expense processes!)</p><p>There are different ways in which people can embrace new technologies, specifically AI-powered efforts that aren't scary and are actually really helpful in the day-to-day. For example, in my daily work, one of my colleagues on the Digital Marketing Team collaborated with one of our AI experts to personalise and automate a subset of our outreach messages. </p><p>Something like that is a tremendous step forward in terms of providing the right messaging, at the right time, to the right person to help make our efforts more effective and helpful.</p><p>However, there do need to be clear steps taken towards risk mitigation. Again, that's the notion of data governance, regulatory compliance, cybersecurity etc. Those are all steps that need to be taken before and/or in parallel with embracing these new technology uses to ensure there's no need to be afraid, either now or in the future.</p></div>
                            
                                

                                                                                                                                                                
                                <h2 class="t-3xl ">What's Nearform’s unique approach to helping organisations along this journey?</h2>
                            
                                

                                                                                                                                    
                                
<div class="t-small ">
<p>At Nearform, we say we meet you where you are. There are numerous factors that influence where an organisation sits on the data and AI maturity curve, so we use our experience to develop impactful solutions along that entire process.</p><p>For instance, we recently worked with a utilities company in South America. Nearform used AI to optimise the operation of a chemical facility at a remote location, providing the data expected from the facility even before it was operational. Our team built a digital twin of the physical facility, which enabled the new tech to go live before the facility launched. Within weeks, the Nearform team defined an end-to-end data flow, launched the platform and produced a functional prototype. And we followed that by delivering the full solution.</p><p>What I love about that example is it combines the best of what Nearform does. We do product, platform, data and AI as well as capability building to ensure our clients have a resolution at the end of an engagement that enables them and sets them up for success, moving forward after we're gone.</p><p>One of the things that really differentiates Nearform is our track record of success. We've been around since 2011, serving the biggest enterprises, governments and nonprofit organisations all over the world. We've improved the quality of patient care. We've helped governments get societies out of the COVID-19 pandemic, through data and through COVID tracking and so forth. We help airlines with their reservation system, so people don't get booted off of flights and things like that.</p><p>A lot of that has to do with the fact we only hire senior engineers, so our teams can get started and are able to get to results much faster than the average tech team. </p><p>We also have a legacy of <u><a href="https://www.nearform.com/open-source/" target="_blank">Open Source</a></u> contributions.
Nearform is one of the largest Open Source contributors in the world, and we're able to apply that knowledge and those resources and tools to our client efforts. It’s another accelerant for delivering value as quickly as possible.</p><p>Something else most people don't talk about is the fact that Nearform has a team most companies <em>want</em> to work with. People tell us: “Not only is your developer team sharp, not only are your designers and product managers collaborative, but they're really <em>nice</em> too.”</p><p>We bring a culture of shared success, and one of being on the same team, with our clients, as opposed to being seen as an outside vendor. We are truly a partner to our clients, one that’s on the same side of the room, whiteboarding together, working together, and having a human connection, in addition to the technology we deliver.</p></div>
                            
                                

                                                                                                                                                                
                                <h2 class="t-3xl ">Can you share your experiences as a woman working in tech?</h2>
                            
                                

                                                                                                                                    
                                
<div class="t-small ">
<p>I've been a woman working in tech for many years, and I've worked in Silicon Valley as well as across the US and throughout Europe. I've seen or been in many situations that, in hindsight, may have been impacted by my gender, both good and bad.</p><p>I’ll start with where I am today, as CMO of Nearform. Being at Nearform has been an amazing experience as a woman in tech. I work on a majority female team, and sit on an executive team that has significant — at parity — female representation. We have an active Women's Guild that provides ongoing support, opportunities for learning, exclusive speakers and more. So I feel really fortunate to work in an environment where women are not only treated equally, but also given the space to celebrate our uniqueness.</p><p>I think a lot of diversity starts with seeing yourself in the roles you want to have. I've been very lucky to work at several companies where women comprise half or even more of the leadership team, and that's always been inspiring.</p><p>One thing I'd say is mentors are incredibly helpful for women in tech. As you know, many cities or sectors have close-knit ecosystems, and networking is critical to meet new people, as potential employers, clients, partners, or more. I personally have both male and female mentors, which I find helpful. It’s vital to have someone you can not only relate to, but who also brings a unique point of view and helps you mitigate your own blind spots.</p><p>Of course, there exists a reality where different individuals or groups may not have the same opportunities as others due to any number of reasons. I'd also say that, from a framing perspective, limits are temporary. Hearing “A woman's never done X,” or “There's no way we'll get to equal representation from Y per cent” is an antiquated and, frankly, damaging way of thinking. 
</p><p>I’ve certainly faced my share of subtle or outright discriminatory comments and situations — and it’s up to us as women to find ways to not only speak truth to power in these individual instances, but also to internalise and learn from these experiences. As a result, we as women have strength beyond what we can at times imagine.</p><p>I truly believe there are no limits to what a woman can do, and often the first step to achieving a specific goal is that general ethos and mindset.</p></div>
                            
                                

                                                                                                                                                                                                ]]>
        </description>
    </item>
  </channel>
</rss>