Tag: Architecture
Posts
Scaling Up: A Guide to Building High-Volume Websites with Leading Cloud Platforms
The modern web demands websites capable of handling vast user bases, processing immense data volumes, and delivering unparalleled performance. Cloud platforms have emerged as essential tools for achieving this scalability, offering robust infrastructure and a diverse set of features to empower website development. This article explores five leading cloud providers - AWS, GCP, Railway, Vercel, and Render - highlighting their strengths in building and scaling high-volume websites.
1. AWS: The Enterprise-Grade Solution
read more
Tag: Cloud
Posts
Scaling Up: A Guide to Building High-Volume Websites with Leading Cloud Platforms
The modern web demands websites capable of handling vast user bases, processing immense data volumes, and delivering unparalleled performance. Cloud platforms have emerged as essential tools for achieving this scalability, offering robust infrastructure and a diverse set of features to empower website development. This article explores five leading cloud providers - AWS, GCP, Railway, Vercel, and Render - highlighting their strengths in building and scaling high-volume websites.
1. AWS: The Enterprise-Grade Solution
read more
Posts
Securing Your Google Kubernetes Engine Clusters from a Critical Vulnerability
Google Kubernetes Engine (GKE) is a popular container orchestration platform that allows developers to deploy and manage containerized applications at scale. However, a recent security vulnerability has been discovered in GKE that could allow attackers to gain access to clusters and steal data or launch denial-of-service attacks.
The vulnerability is caused by a misunderstanding about the system:authenticated group, which includes any Google account with a valid login. This group can be assigned overly permissive roles, such as cluster-admin, which gives attackers full control over a GKE cluster.
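To make the risk concrete, here is a hypothetical sketch of the kind of audit you could run over your cluster's role bindings. The binding data is hard-coded for illustration; in practice you would pull it from `kubectl get clusterrolebindings -o json` or the Kubernetes API.

```python
# Hypothetical sketch: flag RBAC bindings that grant roles to the
# system:authenticated group (i.e. any authenticated Google account).

RISKY_SUBJECT = "system:authenticated"

def find_risky_bindings(bindings):
    """Return (binding_name, role) pairs exposed to all authenticated users."""
    risky = []
    for b in bindings:
        for subject in b.get("subjects", []):
            if subject.get("name") == RISKY_SUBJECT:
                risky.append((b["name"], b["roleRef"]["name"]))
    return risky

# Illustrative data in the shape of ClusterRoleBinding objects:
bindings = [
    {"name": "safe-binding", "roleRef": {"name": "view"},
     "subjects": [{"kind": "Group", "name": "devs"}]},
    {"name": "dangerous-binding", "roleRef": {"name": "cluster-admin"},
     "subjects": [{"kind": "Group", "name": "system:authenticated"}]},
]

print(find_risky_bindings(bindings))  # [('dangerous-binding', 'cluster-admin')]
```

Any binding reported here deserves immediate review, especially if it grants cluster-admin.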
read more
Posts
How to Mitigate Intraday Settlement Risk
Navigating the Rapids: How to Mitigate Intraday Settlement Risk
In the fast-paced world of finance, even minor hiccups can have significant consequences. One such risk, intraday settlement risk, poses a constant challenge for banks and financial institutions. But what exactly is it, and how can institutions effectively manage this risk?
Understanding Intraday Settlement Risk
Intraday settlement risk refers to the potential inability to meet payment obligations at the expected time within a single business day.
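The idea can be sketched numerically: track the running cash position through the day and flag the times at which scheduled outflows would exceed available intraday liquidity. All figures below are invented for illustration.

```python
# Illustrative sketch: detect intraday liquidity shortfalls by walking the
# day's cash flows in time order (outflows are negative).

def intraday_shortfalls(cash_flows, opening_balance, credit_line):
    """cash_flows: list of (time, amount) pairs.
    Returns the times at which the running balance breaches the credit line."""
    balance = opening_balance
    breaches = []
    for time, amount in sorted(cash_flows):
        balance += amount
        if balance < -credit_line:
            breaches.append(time)
    return breaches

flows = [("09:00", -300), ("10:30", 200), ("11:00", -400), ("15:00", 800)]
print(intraday_shortfalls(flows, opening_balance=100, credit_line=300))
# ['11:00'] - the 11:00 payment cannot be funded until later inflows arrive
```

Even though the bank ends the day with a positive balance, the timing of the 11:00 outflow creates a shortfall, which is exactly what end-of-day netting hides.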
read more
Posts
AWS Fargate vs. non-Fargate
Fargate vs. Non-Fargate: Choosing the Right Container Orchestration Strategy for Your Needs
In the age of cloud computing, containers have become the go-to solution for deploying and scaling applications. And when it comes to container orchestration on AWS, the two main options are Fargate and non-Fargate (which typically involves Amazon EC2 instances and Amazon ECS). But which one is right for you?
What is Fargate?
Fargate is a serverless compute engine for Amazon ECS that allows you to run containers without having to provision or manage underlying EC2 instances.
more details
Posts
Google Cloud Run vs AWS App Runner
AWS App Runner and Google Cloud Run are two serverless computing platforms that can help you deploy and run containerized applications without having to worry about servers. Both platforms are relatively new, but they have quickly become popular choices for developers.
What are the similarities?
Both platforms are serverless, meaning that you don’t have to provision or manage servers. The platforms will automatically scale your application up or down based on demand, so you only pay for the resources that you use.
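Pay-per-use billing is easy to reason about with a back-of-the-envelope model. The prices below are invented placeholders, not real App Runner or Cloud Run rates; always check the provider's pricing page.

```python
# Toy sketch of serverless pay-per-use billing: cost scales with requests,
# duration, and memory, rather than with provisioned servers.
# (Rates are illustrative placeholders, not real provider prices.)

def monthly_cost(requests, avg_seconds, memory_gb,
                 price_per_gb_second=0.0000166,
                 price_per_million_requests=0.20):
    compute = requests * avg_seconds * memory_gb * price_per_gb_second
    request_fee = requests / 1_000_000 * price_per_million_requests
    return round(compute + request_fee, 2)

# 5M requests/month, 120 ms each, 0.5 GB memory:
print(monthly_cost(5_000_000, 0.12, 0.5))  # ~5.98
```

Note how a spiky workload that is idle most of the month costs a fraction of an always-on VM, which is the core appeal of these platforms.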
read more
Posts
Google Cloud Dataflow and Azure Stream Analytics
Google Cloud Dataflow and Azure Stream Analytics are both cloud-based streaming data processing services. They offer similar features, but there are some key differences between the two platforms.
Dataflow is a unified programming model and a managed service for developing and executing a wide range of data processing patterns including ETL, batch computation, and continuous computation. It is designed to scale automatically based on the data processing needs. Dataflow also offers various security features including IAM (Identity and Access Management), encryption, and audit logging.
read more
Posts
Machine Learning Ops (MLOps)
MLOps stands for Machine Learning Operations. It is a set of practices that combines machine learning, DevOps, and IT operations to automate the end-to-end machine learning lifecycle, from data preparation to model deployment and monitoring.
The goal of MLOps is to make it easier to deploy and maintain machine learning models in production, while ensuring that they are reliable and efficient. MLOps can help to improve the quality of machine learning models, reduce the time it takes to get them into production, and make it easier to scale machine learning applications.
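One recurring MLOps pattern is an automated deployment gate: a candidate model is only promoted to production if it beats the current model on held-out metrics. The metrics and thresholds below are illustrative, not a standard.

```python
# Minimal sketch of an MLOps-style deployment gate: promote the candidate
# model only if accuracy improves and latency doesn't regress too far.
# (Metric names and thresholds are invented for illustration.)

def should_deploy(candidate_metrics, production_metrics, min_improvement=0.01):
    better_accuracy = (candidate_metrics["accuracy"]
                       >= production_metrics["accuracy"] + min_improvement)
    latency_ok = (candidate_metrics["p99_latency_ms"]
                  <= production_metrics["p99_latency_ms"] * 1.1)
    return better_accuracy and latency_ok

prod = {"accuracy": 0.91, "p99_latency_ms": 40}
candidate = {"accuracy": 0.93, "p99_latency_ms": 42}
print(should_deploy(candidate, prod))  # True
```

In a real pipeline this check would run automatically after training, with the decision logged alongside the model version for auditability.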
read more
Posts
GCP and Azure networking
Azure networking and GCP networking are both comprehensive cloud networking services that offer a wide range of features and capabilities. However, there are some key differences between the two platforms.
Azure networking offers a more traditional networking model, with a focus on virtual networks (VNets), subnets, and network security groups (NSGs). VNets are isolated networks that can be used to group together resources, such as virtual machines (VMs), storage, and applications.
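NSG rules are evaluated in priority order, with lower numbers winning and the first match deciding. The toy sketch below illustrates that evaluation order; real NSGs also have default rules and match on ports, protocols, and address prefixes, which are omitted here.

```python
# Simplified sketch of NSG-style rule evaluation: sort by priority
# (lower number = higher priority), first matching rule decides.
# This toy version matches on an exact source string only.

def evaluate(rules, source):
    for rule in sorted(rules, key=lambda r: r["priority"]):
        if rule["source"] in (source, "*"):
            return rule["access"]
    return "Deny"  # fall through to an implicit deny

rules = [
    {"priority": 100, "source": "10.0.0.0/8", "access": "Allow"},
    {"priority": 200, "source": "*", "access": "Deny"},
]
print(evaluate(rules, "10.0.0.0/8"))   # Allow - matched at priority 100
print(evaluate(rules, "203.0.113.5"))  # Deny - caught by the catch-all rule
```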
read more
Posts
BigQuery ML Example
Here is an example of how to use BigQuery ML on a public dataset to create a logistic regression model to predict whether a user will click on an ad:
# BigQuery ML models are created and trained with SQL; here the statement
# is submitted via the Python client. Table and column names follow the
# original example and are illustrative - substitute your own dataset.
from google.cloud import bigquery

client = bigquery.Client()

query = """
CREATE OR REPLACE MODEL `my_dataset.my_model`
OPTIONS(model_type='logistic_reg', input_label_cols=['churn']) AS
SELECT tenure, contract, monthly_charges, churn
FROM `bigquery-public-data.samples.churn`
"""

# Run the CREATE MODEL statement and wait for training to finish
client.query(query).result()
read more
Posts
Monitor Costs in Azure
There are a few ways to monitor costs in Azure. One way is to use the Azure Cost Management + Billing portal. This portal provides a graphical interface that you can use to view your costs over time, track your spending against budgets, and identify areas where you can save money.
Another way to monitor costs is to use the Azure Cost Management API. This API allows you to programmatically access your cost data and integrate it with other systems.
read more
Posts
MLOps with Kubeflow
Kubeflow is an open-source platform for machine learning and MLOps on Kubernetes. It provides a set of tools and components that make it easy to deploy, manage, and scale machine learning workflows on Kubernetes.
Kubeflow includes a variety of components, including:
Notebooks: A Jupyter notebook service that allows data scientists to develop and experiment with machine learning models.
Pipelines: A tool for building and deploying machine learning pipelines.
Experimentation: A tool for tracking and managing machine learning experiments.
read more
Posts
Confluent Kafka vs Apache Beam
Confluent Kafka and Apache Beam are both open-source platforms for streaming data. However, they have different strengths and weaknesses.
Confluent Kafka is a distributed streaming platform that is used to store and process large amounts of data in real time. It is a good choice for applications that require high throughput and low latency. Kafka is also a good choice for applications that need to be fault-tolerant and scalable.
Apache Beam is a unified programming model for batch and streaming data processing.
read more
Posts
AWS Lambda and GCP Cloud Run
AWS Lambda and Google Cloud Run are both serverless computing platforms that allow you to run code without provisioning or managing servers. However, there are some key differences between the two platforms:
Supported languages: AWS Lambda supports a wide range of programming languages, including Node.js, Java, Python, Go, Ruby, and C#. Cloud Run runs Docker images, which can be built from any language.
Cold start: When a Lambda function is invoked after a period of inactivity, it can take anywhere from a few hundred milliseconds to a few seconds to start up.
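Cold starts matter because module-level initialisation runs once per execution environment; "warm" invocations reuse it. The sketch below mimics the shape of a Lambda Python handler to show the pattern (the counter stands in for expensive work like loading config or opening connections).

```python
# Sketch of cold-start vs warm-start behaviour: code at module level runs
# once per container; the handler runs on every invocation.

INIT_COUNT = 0

def _expensive_init():
    global INIT_COUNT
    INIT_COUNT += 1            # stands in for loading config, connections, etc.
    return {"db": "connected"}

_resources = _expensive_init()  # executed at cold start only

def handler(event, context=None):
    # Warm invocations reuse _resources instead of re-initialising
    return {"status": 200, "db": _resources["db"], "inits": INIT_COUNT}

print(handler({}))  # 'inits' stays at 1 across repeated warm calls
print(handler({}))
```

This is why initialising clients outside the handler is the standard advice for both Lambda and Cloud Run.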
read more
Posts
Cloud gotchas 2
Serverless
Serverless is great. You create your services and hand them over to AWS Lambda/GCP Cloud Run/Azure Functions and let them rip. Your system can scale up to hundreds of instances and quickly service your clients. However, you must consider:
how will your downstream clients respond to such peaks in volume? Will they be able to cope?
how much will auto-scaling cost?
how portable is your code between serverless platforms?
how will you handle bugs in the serverless platform?
read more
Posts
Azure: create a K8s cluster
Here is a Terraform file that you can use to create a Kubernetes cluster in Azure:
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 3.70"
    }
  }
}

provider "azurerm" {
  features {}
  subscription_id = var.azure_subscription_id
  client_id       = var.azure_client_id
  client_secret   = var.azure_client_secret
  tenant_id       = var.azure_tenant_id
}

resource "azurerm_resource_group" "aks_cluster" {
  name     = var.resource_group_name
  location = var.location
}

resource "azurerm_kubernetes_cluster" "aks_cluster" {
  name                = var.aks_cluster_name
  location            = azurerm_resource_group.aks_cluster.location
  resource_group_name = azurerm_resource_group.aks_cluster.name
  dns_prefix          = var.aks_cluster_name

  default_node_pool {
    name       = "default"
    node_count = 3
    vm_size    = "Standard_D2s_v3"
  }

  identity {
    type = "SystemAssigned"
  }
}
read more
Posts
AWS vs Azure vs GCP
AWS, Azure, and GCP are the three leading cloud computing platforms in the market. They offer a wide range of services, including compute, storage, databases, networking, machine learning, and artificial intelligence.
Here are some of the key differences between the three platforms:
Market share: AWS is the market leader, with a 33% market share in 2022. Azure is second with a 22% market share, and GCP is third with a 9% market share.
read more
Posts
Cloud gotchas 1
Since 2017 I’ve been involved in a wide variety of “cloud” projects and there are some common myths I’ve observed.
Migrations are just containers
Change is hard, and unless you’re working for a startup, most cloud transformations start as lift-and-shift exercises. Contracts have been signed and everyone has been sold the myth that all you need to do is “dockerise” your applications and away you go.
Unfortunately, most of the hyperscalers (cloud providers such as GCP, AWS, and Azure) will dazzle you with the way they’ve been doing things for years and simply instruct you to “do as they say”.
read more
Posts
BigQuery ML and Vertex AI Generative AI
BigQuery ML and Vertex AI Generative AI (GenAI) are both machine learning (ML) services that can be used to build and deploy ML models. However, there are some key differences between the two services.
BigQuery ML: BigQuery ML is a fully managed ML service that allows you to build and deploy ML models without having to manage any infrastructure. BigQuery ML uses the same machine learning algorithms as Vertex AI, but it does not offer the same level of flexibility or control.
read more
Posts
Predict the stock market
The premise was simple. Use “big” data analytics and machine learning models to predict the movement of stock prices. However, we had really “dirty” data and our Data Scientists were struggling to separate the noise from the signals. We spent a lot of time cleaning the data and introducing good old principles like “how can I run the model somewhere other than a laptop?”. This was a true startup, a bunch of people in a room trying to get stuff working.
read more
Posts
Pushing the limits of the Google Cloud Platform
This one is better explained with the presentation below. If you want to learn how to run quantitative analytics at scale, it’s well worth a watch.
read more
Tag: Beam
Posts
Simplify Error Handling In Apache Beam With Asgarde
As a data engineer, you’re likely familiar with the challenges of error handling in Apache Beam Java applications. Traditional approaches can lead to verbose code that is difficult to read and maintain. The Asgarde library offers a solution, allowing you to write less, and more expressive, code.
What is Asgarde?
Asgarde is an open-source library that simplifies error handling in Apache Beam Java applications. It accomplishes this by wrapping common error handling patterns into reusable components.
read more
Posts
Running Flink in Production
This is a great watch for those beginning their journey with Flink.
read more
Posts
What are the differences between Apache Beam and Apache Flink?
Apache Beam and Apache Flink are both distributed computing frameworks for processing large amounts of data in parallel, but they have some fundamental differences in their design and functionality.
Apache Beam is a unified programming model for batch and streaming data processing, which provides a high-level API that allows developers to write data processing pipelines that can run on various execution engines, including Apache Flink, Apache Spark, and Google Cloud Dataflow.
read more
Tag: Java
Posts
Simplify Error Handling In Apache Beam With Asgarde
As a data engineer, you’re likely familiar with the challenges of error handling in Apache Beam Java applications. Traditional approaches can lead to verbose code that is difficult to read and maintain. The Asgarde library offers a solution, allowing you to write less, and more expressive, code.
What is Asgarde?
Asgarde is an open-source library that simplifies error handling in Apache Beam Java applications. It accomplishes this by wrapping common error handling patterns into reusable components.
read more
Posts
Java 20 Features
Java 20 was released on March 21, 2023. It is a short-term release supported for six months, following the September 2022 release of JDK 19. JDK 21, due in September 2023, will be a long-term support (LTS) release, backed by multiple years of support.
New features in Java 20:
Record Patterns (Second Preview): This feature enhances the Java programming language with record patterns to deconstruct record values. Record patterns and type patterns can be nested to enable a powerful, declarative, and composable form of data navigation and processing.
read more
Posts
Java VMs
OpenJDK is the free and open-source reference implementation of the Java Platform, Standard Edition, and is available for a variety of platforms.
Oracle JDK is a commercial build of the JDK. It is developed and supported by Oracle, and includes additional features and performance optimizations over OpenJDK.
GraalVM is a high-performance JVM designed for modern applications. It includes a number of features that can improve the performance of Java applications, such as ahead-of-time (AOT) compilation and native image generation.
read more
Posts
Java Bytebuffers
There are several reasons why you should use Java ByteBuffers:
Efficiency: ByteBuffers are very efficient for data manipulation and I/O operations. They can be used to read and write data directly to and from memory, without the need to copy the data to and from an intermediate buffer. This can significantly improve performance, especially for large data sets.
Flexibility: ByteBuffers are very flexible and can be used to represent a wide variety of data types, including integers, floats, strings, and even binary data.
read more
Posts
Chronicle Queue and Aeron
Chronicle Queue and Aeron are both high-performance messaging systems, but they have different strengths and weaknesses.
Chronicle Queue is designed for low latency and high throughput messaging within a single machine or cluster. It stores messages in memory-mapped files, which can achieve very low latency (<1 microsecond) for messages that are sent and received on the same machine. Because the memory-mapped files are backed by disk, messages are persisted and can be recovered in the event of a crash.
read more
Posts
Java 17 Features
Pseudo-Random Number Generators (PRNGs) are getting a major update in Java with the release of JEP 356. New interfaces and implementations make it easier to use different algorithms interchangeably and offer better support for stream-based programming. This is a great improvement for Java developers who require randomness in their applications. The JDK is constantly evolving and improving, and part of that process is ensuring that internal APIs are properly encapsulated. JEP 403 represents a step in that direction, by removing the --illegal-access flag.
read more
Posts
Calling Native Libraries from Java
A couple of options I’ve used and seen:
Java Native Interface (JNI) - watch out for segfaults!
Project Panama - early access
GraalVM - still really new
zt-exec - call the native library as an external process
remotetea - an old favourite if it’s legacy C++ code
read more
Posts
Latency Sensitive Microservices
Great talk by Peter Lawrey regarding latency in microservices. https://www.infoq.com/presentations/latency-sensitive-microservices/
read more
Posts
How to get GXT explorer running in Eclipse
Download the latest jars from http://www.sencha.com/products/extgwt/download/
Follow the “setup.txt” to create an Eclipse project.
Add all the folders in samples/**/src as source folders.
Expand the samples/examples.war into the “war” folder in your Eclipse dir.
Delete the old “gxt.jar” from WEB-INF/lib and replace it with gxt-2.2.3-gwt22.jar (it seems an old version is bundled with samples.war).
Run As, Web app, explorer.html, cross fingers…
read more
Posts
Coherence in the real world
This has really helped in the Coherence projects I’ve been working on. Nothing quite like real world experience. More resources here.
read more
Tag: Ai
Posts
Run AI on Your PC: Unleash the Power of Large Language Models (LLMs) Locally
Large language models (LLMs) have become synonymous with cutting-edge AI, capable of generating realistic text, translating languages, and writing different kinds of creative content. But what if you could leverage this power on your own machine, with complete privacy and control?
Running LLMs locally might seem daunting, but it’s becoming increasingly accessible. Here’s a breakdown of why you might consider it, and how it’s easier than you think:
The Allure of Local LLMs
read more
Posts
Artificial Intelligence and Carbon Emissions
Artificial intelligence (AI) is rapidly transforming our world, but it comes with a hidden cost: carbon emissions.
According to a recent study by the Allen Institute for AI, training a single large language model can produce up to 550 tons of carbon dioxide, equivalent to the emissions of five cars over their lifetimes.
This is because AI training requires massive amounts of computing power, which in turn relies on electricity generated by fossil fuels.
read more about carbon emissions
Posts
BigQuery ML Example
Here is an example of how to use BigQuery ML on a public dataset to create a logistic regression model to predict whether a user will click on an ad:
# BigQuery ML models are created and trained with SQL; here the statement
# is submitted via the Python client. Table and column names follow the
# original example and are illustrative - substitute your own dataset.
from google.cloud import bigquery

client = bigquery.Client()

query = """
CREATE OR REPLACE MODEL `my_dataset.my_model`
OPTIONS(model_type='logistic_reg', input_label_cols=['churn']) AS
SELECT tenure, contract, monthly_charges, churn
FROM `bigquery-public-data.samples.churn`
"""

# Run the CREATE MODEL statement and wait for training to finish
client.query(query).result()
read more
Posts
BigQuery ML and Vertex AI Generative AI
BigQuery ML and Vertex AI Generative AI (GenAI) are both machine learning (ML) services that can be used to build and deploy ML models. However, there are some key differences between the two services.
BigQuery ML: BigQuery ML is a fully managed ML service that allows you to build and deploy ML models without having to manage any infrastructure. BigQuery ML uses the same machine learning algorithms as Vertex AI, but it does not offer the same level of flexibility or control.
read more
Tag: Artificial-Intelligence
Posts
Run AI on Your PC: Unleash the Power of Large Language Models (LLMs) Locally
Large language models (LLMs) have become synonymous with cutting-edge AI, capable of generating realistic text, translating languages, and writing different kinds of creative content. But what if you could leverage this power on your own machine, with complete privacy and control?
Running LLMs locally might seem daunting, but it’s becoming increasingly accessible. Here’s a breakdown of why you might consider it, and how it’s easier than you think:
The Allure of Local LLMs
read more
Posts
Artificial Intelligence and Carbon Emissions
Artificial intelligence (AI) is rapidly transforming our world, but it comes with a hidden cost: carbon emissions.
According to a recent study by the Allen Institute for AI, training a single large language model can produce up to 550 tons of carbon dioxide, equivalent to the emissions of five cars over their lifetimes.
This is because AI training requires massive amounts of computing power, which in turn relies on electricity generated by fossil fuels.
read more about carbon emissions
Tag: Dev
Posts
Run AI on Your PC: Unleash the Power of Large Language Models (LLMs) Locally
Large language models (LLMs) have become synonymous with cutting-edge AI, capable of generating realistic text, translating languages, and writing different kinds of creative content. But what if you could leverage this power on your own machine, with complete privacy and control?
Running LLMs locally might seem daunting, but it’s becoming increasingly accessible. Here’s a breakdown of why you might consider it, and how it’s easier than you think:
The Allure of Local LLMs
read more
Posts
Why Intel and AMD do not make chips like the M2
Here is a comparison of the Apple M2, AMD Ryzen 9 5950X, and AMD Ryzen 9 7950X:
CPU                Cores  Threads  Base clock  Boost clock  L3 cache  Manufacturing process
Apple M2             8       8     3.2 GHz     3.7 GHz      16 MB     5nm
AMD Ryzen 9 5950X   16      32     3.4 GHz     4.9 GHz      64 MB     7nm
AMD Ryzen 9 7950X   16      32     4.5 GHz     5.7 GHz      96 MB     5nm
As you can see, the Ryzen 9 7950X has the most cores, threads, and cache of the three CPUs.
learn more about the reasons
Posts
Scaling rust builds with Bazel
Rust is a popular programming language due to its speed, safety, and memory efficiency. However, it can be challenging to scale Rust builds, especially for large projects with many dependencies.
Bazel is a build system that can help you scale your Rust builds. It is a powerful tool with many features, including:
Parallelism: Bazel can build your code in parallel, which can significantly speed up your builds.
Caching: Bazel caches the results of previous builds, so it only needs to rebuild the parts of your code that have changed.
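The caching behaviour can be illustrated with a toy model: a target is rebuilt only when the hash of its inputs changes. Bazel's real cache keys also cover the command line, toolchain, and transitive dependencies; this sketch keeps only the content-hash idea.

```python
# Toy illustration of content-hash caching, the idea behind Bazel's
# incremental builds: unchanged inputs mean no rebuild.

import hashlib

cache = {}
build_count = 0

def build(target, inputs):
    global build_count
    key = (target, hashlib.sha256("".join(inputs).encode()).hexdigest())
    if key not in cache:
        build_count += 1            # pretend this is an expensive compile
        cache[key] = f"{target}.bin"
    return cache[key]

build("app", ["fn main() {}"])
build("app", ["fn main() {}"])                 # same inputs: cache hit
build("app", ["fn main() { println!(); }"])    # input changed: rebuild
print(build_count)  # 2
```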
build Rust with Bazel
Posts
Python is getting ready to lose its GIL
The Python Global Interpreter Lock (GIL) is a mechanism that prevents multiple threads from executing Python code at the same time. This has been a source of frustration for some Python users, as it can limit the performance of applications that need to use multiple cores.
PEP 703 proposes a solution to this problem by making the Python interpreter thread-safe and removing the GIL.
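The GIL's effect is easy to demonstrate: CPU-bound work split across threads still computes the correct answer, but because only one thread executes Python bytecode at a time, there is little parallel speed-up today. PEP 703's free-threaded mode aims to let code like this actually use multiple cores.

```python
# CPU-bound work on two threads: correct results, but under the GIL the
# threads take turns rather than running in parallel.

import threading

def count_primes(lo, hi, results, idx):
    results[idx] = sum(
        1 for n in range(lo, hi)
        if n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))
    )

results = [0, 0]
threads = [
    threading.Thread(target=count_primes, args=(2, 5000, results, 0)),
    threading.Thread(target=count_primes, args=(5000, 10000, results, 1)),
]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sum(results))  # 1229 primes below 10000
```

Today, `multiprocessing` or native extensions are the usual workaround for this limitation.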
read more
Posts
Raspberry Pi/Raspbian - chromium/chromedriver crash after upgrade to 99.0.4844.51
Upgraded to chromedriver 99.0.4844.51 on Raspbian (bullseye) and seeing this in your chromedriver.log?
[0312/111354.689372:ERROR:egl_util.cc(74)] Failed to load GLES library: /usr/lib/chromium-browser/libGLESv2.so: /usr/lib/chromium-browser/libGLESv2.so: cannot open shared object file: No such file or directory
[0312/111354.709636:ERROR:viz_main_impl.cc(188)] Exiting GPU process due to errors during initialization
[0312/111354.735541:ERROR:gpu_init.cc(454)] Passthrough is not supported, GL is disabled, ANGLE is
Add “--disable-gpu” as an option when setting up the browser, e.g. for Selenium/Java:
ChromeOptions options = new ChromeOptions();
options.addArguments("--disable-gpu");
It looks like the behaviour has changed, as this “shouldn’t” be required.
read more
Posts
DevSecOps vs SRE
DevSecOps and SRE are two complementary approaches to ensuring the reliability and security of software systems.
DevSecOps is a practice that integrates security into the entire software development lifecycle (SDLC). This means that security is considered from the very beginning of the development process, and it is not an afterthought. DevSecOps teams work closely with development, operations, and security teams to ensure that security is built into the code from the start.
read morePosts
How to get GXT explorer running in Eclipse
Download the latest jars from http://www.sencha.com/products/extgwt/download/
Follow the “setup.txt” to create an Eclipse project.
Add all the folders in samples/**/src as source folders.
Expand the samples/examples.war into the “war” folder in your Eclipse dir.
Delete the old “gxt.jar” from WEB-INF/lib and replace it with gxt-2.2.3-gwt22.jar (it seems an old version is bundled with samples.war).
Run As, Web app, explorer.html, cross fingers…
read more
Tag: Llm
Posts
Run AI on Your PC: Unleash the Power of Large Language Models (LLMs) Locally
Large language models (LLMs) have become synonymous with cutting-edge AI, capable of generating realistic text, translating languages, and writing different kinds of creative content. But what if you could leverage this power on your own machine, with complete privacy and control?
Running LLMs locally might seem daunting, but it’s becoming increasingly accessible. Here’s a breakdown of why you might consider it, and how it’s easier than you think:
The Allure of Local LLMs
read more
Tag: Kafka
Posts
Modern Data Engineering: Essential Skills for Real-Time Data Platforms
In today’s data-driven world, organizations require real-time insights gleaned from high-velocity data streams. This necessitates a skilled data engineering team equipped with the latest technologies and expertise. This blog post explores the crucial skillsets sought after in data engineers who will design, develop, implement, and support cutting-edge real-time data platforms.
Mastering Streaming Architectures: Kafka, Kafka Connect, and Beyond
At the core of real-time data pipelines lies the ability to ingest and process data in motion.
read more
Posts
Kafka Connect in 2024
There are several alternatives to Kafka Connect, each with its own strengths and weaknesses depending on your specific needs. Here’s a breakdown of some popular options:
1. Stream Processing Frameworks:
Apache Flink: A powerful open-source stream processing framework that can be used to build data pipelines with custom logic for data transformation and enrichment. Flink natively integrates with Kafka and can be used as an alternative to Kafka Connect for complex processing needs.
read more
Posts
Confluent Kafka vs Apache Beam
Confluent Kafka and Apache Beam are both open-source platforms for streaming data. However, they have different strengths and weaknesses.
Confluent Kafka is a distributed streaming platform that is used to store and process large amounts of data in real time. It is a good choice for applications that require high throughput and low latency. Kafka is also a good choice for applications that need to be fault-tolerant and scalable.
Apache Beam is a unified programming model for batch and streaming data processing.
read more
Tag: Messaging
Posts
Modern Data Engineering: Essential Skills for Real-Time Data Platforms
In today’s data-driven world, organizations require real-time insights gleaned from high-velocity data streams. This necessitates a skilled data engineering team equipped with the latest technologies and expertise. This blog post explores the crucial skillsets sought after in data engineers who will design, develop, implement, and support cutting-edge real-time data platforms.
Mastering Streaming Architectures: Kafka, Kafka Connect, and Beyond
At the core of real-time data pipelines lies the ability to ingest and process data in motion.
read more
Posts
Kafka Connect in 2024
There are several alternatives to Kafka Connect, each with its own strengths and weaknesses depending on your specific needs. Here’s a breakdown of some popular options:
1. Stream Processing Frameworks:
Apache Flink: A powerful open-source stream processing framework that can be used to build data pipelines with custom logic for data transformation and enrichment. Flink natively integrates with Kafka and can be used as an alternative to Kafka Connect for complex processing needs.
read more
Posts
Confluent Kafka vs Apache Beam
Confluent Kafka and Apache Beam are both open-source platforms for streaming data. However, they have different strengths and weaknesses.
Confluent Kafka is a distributed streaming platform that is used to store and process large amounts of data in real time. It is a good choice for applications that require high throughput and low latency. Kafka is also a good choice for applications that need to be fault-tolerant and scalable.
Apache Beam is a unified programming model for batch and streaming data processing.
read more
Tag: Pubsub
Posts
Modern Data Engineering: Essential Skills for Real-Time Data Platforms
In today’s data-driven world, organizations require real-time insights gleaned from high-velocity data streams. This necessitates a skilled data engineering team equipped with the latest technologies and expertise. This blog post explores the crucial skillsets sought after in data engineers who will design, develop, implement, and support cutting-edge real-time data platforms.
Mastering Streaming Architectures: Kafka, Kafka Connect, and Beyond
At the core of real-time data pipelines lies the ability to ingest and process data in motion.
read more
Posts
Kafka Connect in 2024
There are several alternatives to Kafka Connect, each with its own strengths and weaknesses depending on your specific needs. Here’s a breakdown of some popular options:
1. Stream Processing Frameworks:
Apache Flink: A powerful open-source stream processing framework that can be used to build data pipelines with custom logic for data transformation and enrichment. Flink natively integrates with Kafka and can be used as an alternative to Kafka Connect for complex processing needs.
read more
Posts
Confluent Kafka vs Apache Beam
Confluent Kafka and Apache Beam are both open-source platforms for streaming data. However, they have different strengths and weaknesses.
Confluent Kafka is a distributed streaming platform that is used to store and process large amounts of data in real time. It is a good choice for applications that require high throughput and low latency. Kafka is also a good choice for applications that need to be fault-tolerant and scalable.
Apache Beam is a unified programming model for batch and streaming data processing.
read more
Tag: Risk
Posts
Risk Calculations and Aggregation
Settlement risk, the potential for a counterparty to default on their obligations on a trade settlement date, is a constant concern in the financial world. Traditionally, calculating and managing this risk has been a complex and siloed process, often residing within the confines of the back office. However, the rise of sophisticated in-house front-office platforms presents an opportunity to proactively address settlement risk and gain a holistic view of the entire trading lifecycle.
read more
Posts
How to Mitigate Intraday Settlement Risk
Navigating the Rapids: How to Mitigate Intraday Settlement Risk
In the fast-paced world of finance, even minor hiccups can have significant consequences. One such risk, intraday settlement risk, poses a constant challenge for banks and financial institutions. But what exactly is it, and how can institutions effectively manage this risk?
Understanding Intraday Settlement Risk
Intraday settlement risk refers to the potential inability to meet payment obligations at the expected time within a single business day.
read morePosts
Risk and tribal language / Counterparty Credit Risk
Whenever you start on a new project there’s always a certain amount of tribal language to decode. A colleague of mine kept talking about a system that “calculates IRC”. When I asked what IRC was, he didn’t know….
Here’s the best reference I found for developers looking to understand Counterparty Credit Risk.
read morePosts
Delta risk
QuantLib is a free and open-source software library for quantitative finance. It provides a wide range of functionality for pricing and risk-managing financial derivatives, including interest rate swaps.
To calculate the delta risk of an interest rate swap in Python using QuantLib, you can follow these steps:
Import the necessary QuantLib modules:

```
import QuantLib as ql
```

Create a QuantLib YieldTermStructure object to represent the current interest rate curve:
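The excerpt stops before the curve construction, but the bump-and-reprice idea behind a delta number can be sketched without QuantLib at all. In the sketch below the flat 2% rate, the notional and the annual cash flows are illustrative assumptions, not QuantLib API — QuantLib would supply real curves and swap pricing:

```python
# Bump-and-reprice sketch of delta risk for a toy fixed-rate leg.
# The 2% flat rate, notional and annual cash flows are illustrative
# assumptions; QuantLib would supply real curves and swap pricing.

def pv(rate, notional=1_000_000, coupon=0.02, years=5):
    """Present value of annual fixed coupons plus notional, flat discounting."""
    cash_flows = [notional * coupon] * years
    cash_flows[-1] += notional
    return sum(cf / (1 + rate) ** (t + 1) for t, cf in enumerate(cash_flows))

def delta(rate, bump=1e-4):
    """Sensitivity of PV to a 1bp parallel shift in the flat rate,
    estimated with a central difference (bump up, bump down, reprice)."""
    return (pv(rate + bump) - pv(rate - bump)) / (2 * bump)

base = pv(0.02)
print(f"PV at 2%: {base:,.2f}")        # prices at par: 1,000,000.00
print(f"Delta per unit rate: {delta(0.02):,.2f}")
```

The same bump-and-reprice pattern applies once the toy `pv` is swapped for a QuantLib swap NPV driven off a bumped YieldTermStructure.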
read moreTag: Gcp
Posts
Securing Your Google Kubernetes Engine Clusters from a Critical Vulnerability
Google Kubernetes Engine (GKE) is a popular container orchestration platform that allows developers to deploy and manage containerized applications at scale. However, a recent security vulnerability has been discovered in GKE that could allow attackers to gain access to clusters and steal data or launch denial-of-service attacks.
The vulnerability is caused by a misunderstanding about the system:authenticated group, which includes any Google account with a valid login. This group can be assigned overly permissive roles, such as cluster-admin, which gives attackers full control over a GKE cluster.
read morePosts
Google Cloud Run vs AWS App Runner
AWS App Runner and Google Cloud Run are two serverless computing platforms that can help you deploy and run containerized applications without having to worry about servers. Both platforms are relatively new, but they have quickly become popular choices for developers.
What are the similarities?
Both platforms are serverless, meaning that you don’t have to provision or manage servers. The platforms will automatically scale your application up or down based on demand, so you only pay for the resources that you use.
read morePosts
GCP and Azure networking
Azure networking and GCP networking are both comprehensive cloud networking services that offer a wide range of features and capabilities. However, there are some key differences between the two platforms.
Azure networking offers a more traditional networking model, with a focus on virtual networks (VNets), subnets, and network security groups (NSGs). VNets are isolated networks that can be used to group together resources, such as virtual machines (VMs), storage, and applications.
read morePosts
Reverse engineering an existing GCP project with terraformer
It can be tough to reverse engineer an existing project that has never used Terraform. Terraformer can look at an existing project and generate the corresponding Terraform code for you. I tried it out on an existing legacy project which used Google Cloud Storage, BigQuery and various service accounts. The setup was a little tricky so I put together a script to simplify things. The script assumes you have gcloud set up, or a service account key/impersonation, and you may need to adjust the --resources parameter.
read morePosts
Undelete bigquery table
One hour ago:
bq cp mydataset.table@-3600000 mydataset.table_restored
Absolute (ms since UNIX epoch) GMT: Wednesday, 26 May 2021 13:41:53 = 1622036513000 https://www.epochconverter.com/
bq cp mydataset.table@1622036513000 mydataset.table_restored
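The `@` decorator expects milliseconds since the UNIX epoch; assuming GNU coreutils is available, `date` can produce the value instead of a visit to epochconverter:

```shell
# Milliseconds since the UNIX epoch for a GMT timestamp (GNU date).
ts_ms="$(date -u -d '2021-05-26 13:41:53' +%s)000"
echo "$ts_ms"   # 1622036513000
bq cp "mydataset.table@${ts_ms}" mydataset.table_restored
```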
More on Bigquery time travel
read morePosts
Confluent Cloud Kafka vs Google Cloud Pubsub Feature compare 2020
Feature comparison (Confluent Cloud Kafka vs Google Cloud Pubsub):
Data Retention — Confluent Cloud Kafka: set retention per topic, including unlimited retention with log compaction. Google Cloud Pubsub: retains unacknowledged messages in persistent storage for 7 days from the moment of publication; there is no limit on the number of retained messages. Notes: have to write a custom subscriber/publisher to save beyond 7 days [L] + ongoing BAU [S].
Replay — Confluent Cloud Kafka: a consumer requests an “offset”, however the retention period is dictated by the broker config. Google Cloud Pubsub: “snapshots” can be created for later replay, but these are limited to 7 days as per the retention policy.
read morePosts
dataflow real time + aggregate
A great way to split up your pipeline based on the urgency of results: aggregate-data-with-dataflow
read morePosts
Google Cloud IAM Madness
After the recent GCP outage related to IAM, I found some odd behaviour with gsutil/gcloud. A script that had faithfully run for many months stopped working with:
ServiceException: 401 Anonymous caller does not have storage.buckets.list access to project xxxx
I tried recreating the service account key used for the operation with no luck. To fix the problem, I had to create a new bucket!
gsutil mb -b on -l us-east1 gs://my-awesome-bucket123ed321/
read morePosts
Cloud Billing Budget API in beta
You can finally set budgets via the API in GCP. This is a huge relief to all those org admins out there who have had to do this manually.
AND, hold on to your hats, there’s terraform support as well! Looks like Christmas came late….
```
data "google_billing_account" "account" {
  provider        = google-beta
  billing_account = "000000-0000000-0000000-000000"
}

resource "google_billing_budget" "budget" {
  provider        = google-beta
  billing_account = data.google_billing_account.account.id
  display_name    = "Example Billing Budget"

  amount {
    specified_amount {
      currency_code = "USD"
      units         = "100000"
    }
  }

  threshold_rules {
    threshold_percent = 0.
```
read morePosts
Managing GCP projects with Terraform
An invaluable guide on how to start managing GCP projects with Terraform. I wish I’d found this a year ago.
read morePosts
Terraform init in the real world
Rather than fully configuring your backend in a backend.tf file:

```
terraform {
  backend "gcs" {
    bucket = "my-bucket-123"
    prefix = "terraform/state"
  }
}
```

I prefer to use the command line in order to avoid polluting the code with any environment-specific names:

```
terraform init \
  -backend-config="bucket=my-bucket-123" \
  -backend-config="prefix=terraform/state"
```
read morePosts
Opinionated Google Cloud Platform projects
I’m glad Google are finally starting to embrace Terraform by creating their own modules. Version 0.1.0 of the project-factory looks really promising.
read morePosts
Taming the stragglers in Google Cloud Dataflow
I’m currently benchmarking Flink against Google Cloud Dataflow using the same Apache Beam pipeline for quantitative analytics. One observation I’ve seen with Flink is the tail latency associated with some shards. Google Cloud Dataflow can optimise away stragglers in large jobs using “Dynamic Work Rebalancing”. As far as I know, Flink is currently unable to perform similar optimisations.
read moreTag: Gotchas
Posts
Securing Your Google Kubernetes Engine Clusters from a Critical Vulnerability
Google Kubernetes Engine (GKE) is a popular container orchestration platform that allows developers to deploy and manage containerized applications at scale. However, a recent security vulnerability has been discovered in GKE that could allow attackers to gain access to clusters and steal data or launch denial-of-service attacks.
The vulnerability is caused by a misunderstanding about the system:authenticated group, which includes any Google account with a valid login. This group can be assigned overly permissive roles, such as cluster-admin, which gives attackers full control over a GKE cluster.
read morePosts
AWS Lambda and GCP Cloud
AWS Lambda and Google Cloud Run are both serverless computing platforms that allow you to run code without provisioning or managing servers. However, there are some key differences between the two platforms:
Supported languages: AWS Lambda supports a wide range of programming languages including Node.js, Java, Python, Go, Ruby, and C#. Cloud Run supports Docker images, which can be written in any language. Cold start: When a Lambda function is invoked after sitting idle, it can take anywhere from a few hundred milliseconds to several seconds to start up.
read morePosts
Cloud gotchas 2
Serverless
Serverless is great. You create your services, hand them over to AWS Lambda/GCP Cloud Run/Azure Functions and let them rip. Your system can scale up to hundreds of instances and quickly service your clients. However, you must consider:
how will your downstream clients respond to such peaks in volume? Will they be able to cope?
how much will auto-scaling cost?
how portable is your code between serverless platforms?
how will you handle bugs in the serverless platform?
read morePosts
Cloud gotchas 1
Since 2017 I’ve been involved in a wide variety of “cloud” projects and there’s some common myths I’ve observed.
Migrations are just containers
Change is hard and, unless you’re working for a startup, most cloud transformations start as lift-and-shift exercises. Contracts have been signed and everyone has been sold the myth that all you need to do is “dockerise” your applications and away you go.
Unfortunately, most of the hyperscalers (cloud providers - GCP, AWS, Azure, etc.) will dazzle you with the way they’ve been doing things for years and instruct you to “do as they say”.
read moreTag: Security
Posts
Securing Your Google Kubernetes Engine Clusters from a Critical Vulnerability
Google Kubernetes Engine (GKE) is a popular container orchestration platform that allows developers to deploy and manage containerized applications at scale. However, a recent security vulnerability has been discovered in GKE that could allow attackers to gain access to clusters and steal data or launch denial-of-service attacks.
The vulnerability is caused by a misunderstanding about the system:authenticated group, which includes any Google account with a valid login. This group can be assigned overly permissive roles, such as cluster-admin, which gives attackers full control over a GKE cluster.
read moreTag: Aws
Posts
AWS Fargate vs. non-Fargate
Fargate vs. Non-Fargate: Choosing the Right Container Orchestration Strategy for Your Needs
In the age of cloud computing, containers have become the go-to solution for deploying and scaling applications. And when it comes to container orchestration on AWS, the two main options are Fargate and non-Fargate (which typically involves Amazon EC2 instances and Amazon ECS). But which one is right for you?
What is Fargate?
Fargate is a serverless compute engine for Amazon ECS that allows you to run containers without having to provision or manage underlying EC2 instances.
more detailsTag: Serverless
Posts
AWS Fargate vs. non-Fargate
Fargate vs. Non-Fargate: Choosing the Right Container Orchestration Strategy for Your Needs
In the age of cloud computing, containers have become the go-to solution for deploying and scaling applications. And when it comes to container orchestration on AWS, the two main options are Fargate and non-Fargate (which typically involves Amazon EC2 instances and Amazon ECS). But which one is right for you?
What is Fargate?
Fargate is a serverless compute engine for Amazon ECS that allows you to run containers without having to provision or manage underlying EC2 instances.
more detailsPosts
AWS Lambda and GCP Cloud
AWS Lambda and Google Cloud Run are both serverless computing platforms that allow you to run code without provisioning or managing servers. However, there are some key differences between the two platforms:
Supported languages: AWS Lambda supports a wide range of programming languages including Node.js, Java, Python, Go, Ruby, and C#. Cloud Run supports Docker images, which can be written in any language. Cold start: When a Lambda function is invoked after sitting idle, it can take anywhere from a few hundred milliseconds to several seconds to start up.
read moreTag: Environment
Posts
Artificial Intelligence and Carbon Emissions
Artificial intelligence (AI) is rapidly transforming our world, but it comes with a hidden cost: carbon emissions.
According to a recent study by the Allen Institute for AI, training a single large language model can produce up to 550 tons of carbon dioxide, equivalent to the emissions of five cars over their lifetime.
This is because AI training requires massive amounts of computing power, which in turn relies on electricity generated by fossil fuels.
read more about carbon emissionsTag: Performance
Posts
Why Intel and AMD do not make chips like the M2
Here is a comparison of the Apple M2, AMD Ryzen 9 5950X, and AMD Ryzen 9 7950X:

CPU                 Cores  Threads  Base clock  Boost clock  L3 cache  Process
Apple M2            8      8        3.2 GHz     3.7 GHz      16 MB     5nm
AMD Ryzen 9 5950X   16     32       3.4 GHz     4.9 GHz      64 MB     7nm
AMD Ryzen 9 7950X   16     32       4.5 GHz     5.7 GHz      96 MB     5nm

As the table shows, the Ryzen 9 7950X has the largest cache and the highest clocks of the three CPUs, and (along with the 5950X) the most cores and threads.
learn more about the reasonsPosts
Python is getting ready to lose its GIL
Python is getting ready to lose its GIL
The Python Global Interpreter Lock (GIL) is a mechanism that prevents multiple threads from executing Python code at the same time. This has been a source of frustration for some Python users, as it can limit the performance of applications that need to use multiple cores.
PEP 703 proposes a solution to this problem by making the Python interpreter thread-safe and removing the GIL.
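The effect is easy to demonstrate: CPU-bound work split across threads produces correct results, but under the GIL only one thread executes Python bytecode at a time, so there is no parallel speed-up on CPython. A minimal sketch:

```python
import threading

# CPU-bound task: count primes below n by trial division.
def count_primes(n):
    count = 0
    for candidate in range(2, n):
        if all(candidate % d for d in range(2, int(candidate ** 0.5) + 1)):
            count += 1
    return count

# Run the same work on 4 threads. The GIL serialises bytecode
# execution, so this is correct but no faster than a single thread
# on a stock CPython build.
results = []
def worker(n):
    results.append(count_primes(n))

threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(results)  # [168, 168, 168, 168]
```

With PEP 703's free-threaded build, the same code could run the four workers on four cores; the point of the PEP is that no source change is needed.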
read moreTag: Build
Posts
Scaling rust builds with Bazel
Rust is a popular programming language due to its speed, safety, and memory efficiency. However, it can be challenging to scale Rust builds, especially for large projects with many dependencies.
Bazel is a build system that can help you scale your Rust builds. It is a powerful tool with many features, including:
Parallelism: Bazel can build your code in parallel, which can significantly speed up your builds. Caching: Bazel caches the results of previous builds, so it only needs to rebuild the parts of your code that have changed.
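With the rules_rust ruleset, a cached, parallel-buildable target looks roughly like this — the crate name, file paths and target names below are illustrative assumptions, not taken from any particular project:

```
# BUILD.bazel -- illustrative sketch using rules_rust
load("@rules_rust//rust:defs.bzl", "rust_binary", "rust_library")

rust_library(
    name = "mylib",                  # hypothetical library crate
    srcs = glob(["src/lib/**/*.rs"]),
)

rust_binary(
    name = "myapp",                  # hypothetical binary target
    srcs = ["src/main.rs"],
    deps = [":mylib"],               # Bazel tracks this edge for caching
)
```

Because Bazel knows the dependency graph explicitly, a change to `src/main.rs` rebuilds only `myapp`, while `mylib` is served from the cache.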
build Rust with BazelTag: Rust
Posts
Scaling rust builds with Bazel
Rust is a popular programming language due to its speed, safety, and memory efficiency. However, it can be challenging to scale Rust builds, especially for large projects with many dependencies.
Bazel is a build system that can help you scale your Rust builds. It is a powerful tool with many features, including:
Parallelism: Bazel can build your code in parallel, which can significantly speed up your builds. Caching: Bazel caches the results of previous builds, so it only needs to rebuild the parts of your code that have changed.
build Rust with BazelPosts
Beyond Bash: Exploring Modern Rust-based Command-Line Utilities
Rust has emerged as a powerhouse for building robust, lightning-fast, and secure software. Its influence extends beyond web applications and systems programming—it’s also gaining traction in the realm of command-line utilities.
Here’s a curated list of Rust-powered command-line tools that can streamline your workflow, enhance productivity, and make your life in the terminal a breeze:
Text Manipulation bat : A “cat” clone with wings, featuring syntax highlighting, Git integration, and automatic paging for seamless viewing of text files.
more detailsTag: Python
Posts
Python is getting ready to lose its GIL
Python is getting ready to lose its GIL
The Python Global Interpreter Lock (GIL) is a mechanism that prevents multiple threads from executing Python code at the same time. This has been a source of frustration for some Python users, as it can limit the performance of applications that need to use multiple cores.
PEP 703 proposes a solution to this problem by making the Python interpreter thread-safe and removing the GIL.
read moreTag: Azure
Posts
Google Cloud Run vs AWS App Runner
AWS App Runner and Google Cloud Run are two serverless computing platforms that can help you deploy and run containerized applications without having to worry about servers. Both platforms are relatively new, but they have quickly become popular choices for developers.
What are the similarities?
Both platforms are serverless, meaning that you don’t have to provision or manage servers. The platforms will automatically scale your application up or down based on demand, so you only pay for the resources that you use.
read morePosts
Google Cloud Dataflow and Azure Stream Analytics
Google Cloud Dataflow and Azure Stream Analytics are both cloud-based streaming data processing services. They offer similar features, but there are some key differences between the two platforms.
Dataflow is a unified programming model and a managed service for developing and executing a wide range of data processing patterns including ETL, batch computation, and continuous computation. It is designed to scale automatically based on the data processing needs. Dataflow also offers various security features including IAM (Identity and Access Management), encryption, and audit logging.
read morePosts
GCP and Azure networking
Azure networking and GCP networking are both comprehensive cloud networking services that offer a wide range of features and capabilities. However, there are some key differences between the two platforms.
Azure networking offers a more traditional networking model, with a focus on virtual networks (VNets), subnets, and network security groups (NSGs). VNets are isolated networks that can be used to group together resources, such as virtual machines (VMs), storage, and applications.
read morePosts
Monitor Costs in Azure
There are a few ways to monitor costs in Azure. One way is to use the Azure Cost Management + Billing portal. This portal provides a graphical interface that you can use to view your costs over time, track your spending against budgets, and identify areas where you can save money.
Another way to monitor costs is to use the Azure Cost Management API. This API allows you to programmatically access your cost data and integrate it with other systems.
read morePosts
Azure create K8 cluster
Here is a Terraform file that you can use to create a Kubernetes cluster in Azure:
```
provider "azurerm" {
  version         = "~> 3.70.0"
  subscription_id = var.azure_subscription_id
  client_id       = var.azure_client_id
  client_secret   = var.azure_client_secret
  tenant_id       = var.azure_tenant_id
}

resource "azurerm_resource_group" "aks_cluster" {
  name     = var.resource_group_name
  location = var.location
}

resource "azurerm_kubernetes_cluster" "aks_cluster" {
  name                = var.aks_cluster_name
  location            = azurerm_resource_group.aks_cluster.location
  resource_group_name = azurerm_resource_group.aks_cluster.name

  # In the azurerm provider the node count and VM size belong in a
  # default_node_pool block, not at the top level of the resource.
  default_node_pool {
    name       = "default"
    node_count = 3
    vm_size    = "Standard_D2s_v3"
  }

  network_profile {
    kubernetes_network_interface_id = azurerm_network_interface.
```
read moreTag: Project
Posts
Google Cloud Run vs AWS App Runner
AWS App Runner and Google Cloud Run are two serverless computing platforms that can help you deploy and run containerized applications without having to worry about servers. Both platforms are relatively new, but they have quickly become popular choices for developers.
What are the similarities?
Both platforms are serverless, meaning that you don’t have to provision or manage servers. The platforms will automatically scale your application up or down based on demand, so you only pay for the resources that you use.
read morePosts
GCP and Azure networking
Azure networking and GCP networking are both comprehensive cloud networking services that offer a wide range of features and capabilities. However, there are some key differences between the two platforms.
Azure networking offers a more traditional networking model, with a focus on virtual networks (VNets), subnets, and network security groups (NSGs). VNets are isolated networks that can be used to group together resources, such as virtual machines (VMs), storage, and applications.
read morePosts
Monitor Costs in Azure
There are a few ways to monitor costs in Azure. One way is to use the Azure Cost Management + Billing portal. This portal provides a graphical interface that you can use to view your costs over time, track your spending against budgets, and identify areas where you can save money.
Another way to monitor costs is to use the Azure Cost Management API. This API allows you to programmatically access your cost data and integrate it with other systems.
read morePosts
Azure create K8 cluster
Here is a Terraform file that you can use to create a Kubernetes cluster in Azure:
```
provider "azurerm" {
  version         = "~> 3.70.0"
  subscription_id = var.azure_subscription_id
  client_id       = var.azure_client_id
  client_secret   = var.azure_client_secret
  tenant_id       = var.azure_tenant_id
}

resource "azurerm_resource_group" "aks_cluster" {
  name     = var.resource_group_name
  location = var.location
}

resource "azurerm_kubernetes_cluster" "aks_cluster" {
  name                = var.aks_cluster_name
  location            = azurerm_resource_group.aks_cluster.location
  resource_group_name = azurerm_resource_group.aks_cluster.name

  # In the azurerm provider the node count and VM size belong in a
  # default_node_pool block, not at the top level of the resource.
  default_node_pool {
    name       = "default"
    node_count = 3
    vm_size    = "Standard_D2s_v3"
  }

  network_profile {
    kubernetes_network_interface_id = azurerm_network_interface.
```
read morePosts
How to create an effective SRE culture
Here are some tips on how to create an effective SRE culture:
Start with the right mindset. SRE sees reliability as everyone’s responsibility, not just the responsibility of the SRE team. It is important to create a culture where everyone is empowered to take ownership of reliability and to make decisions that will improve the reliability of the systems they work on.
Embrace failure. Failure is inevitable, so it is important to create a culture where failure is seen as an opportunity to learn and improve.
read morePosts
Pushing the limits of the Google Cloud Platform
This one is better explained with the presentation below. If you want to learn how to run quantitative analytics at scale, it’s well worth a watch.
read morePosts
Cash Equities: Order Management System
Built and maintained a client and market side booking service, off order book trade reporting engine and trade manager/repository
Like most banks, this one suffered from the not invented here syndrome. They had decided to pretty much reimplement the core libraries to optimize for Zero Garbage Collection, low latency and high throughput. Unfortunately they were not optimized for large development teams and maintainability.
I helped the team debug issues and introduce new functionality using a tech stack that consisted of Java, Groovy, Spring, FIX, JUnit, MSSQL and JRebel.
read moreProjects
Algo trading
I developed an eTrading platform routing client FIX flow to the firm’s Algorithmic Trading platform.
The platform was used by traders to send orders to the market based on a particular strategy (e.g. VWAP).
VWAP stands for Volume-Weighted Average Price. It is a technical analysis indicator that is used to measure the average price of a security over a given period of time, taking into account the volume of trades.
The VWAP is calculated by adding up the dollar value of all trades for a security and then dividing by the total volume of trades.
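That calculation is worth making concrete — total traded dollar value divided by total traded volume. The fills below are made-up illustrative data:

```python
# VWAP: total traded dollar value divided by total traded volume.
def vwap(trades):
    """trades: iterable of (price, volume) pairs for the period."""
    total_value = sum(price * volume for price, volume in trades)
    total_volume = sum(volume for _, volume in trades)
    return total_value / total_volume

# Made-up fills for illustration: (price, volume).
fills = [(100.0, 200), (101.0, 300), (99.5, 500)]
print(vwap(fills))  # 100.05
```

An algo targeting VWAP slices the parent order so its own fills track this volume-weighted average rather than the price at any single moment.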
read morePosts
Iceberg and VWAP
Developed an eTrading platform routing client FIX flow to the firm’s Algorithmic Trading platform.
Used profiling/debugging tools to resolve critical issues around lost trade messages.
Java, YourKit, Swing, Spring, Tibco EMS, FIX
read morePosts
Money in, money out
Design and development of a real time matching engine to provide insights into liquidity for funding forecasts and regulatory requirements.
Java, BDD/TDD, JUnit/TestNG, Maven, Coherence, Spring Core, Spring Data, Spring Boot Micro Services, IBM MQ, Maven, Jira, TeamCity
read moreTag: Crypto
Posts
Binance troubles
Binance is being sued by a number of regulatory agencies and individuals for a variety of reasons, including:
Operating an unregistered securities exchange: The Securities and Exchange Commission (SEC) sued Binance in June 2023 for operating an unregistered securities exchange. The SEC alleged that Binance allowed US residents to trade unregistered securities, including tokens that were offered through initial coin offerings (ICOs). Market manipulation: The Commodity Futures Trading Commission (CFTC) also sued Binance in June 2023 for market manipulation.
read morePosts
Crypto liquidity
The liquidity of crypto markets is a measure of how easily you can buy or sell a cryptocurrency without affecting its price. A liquid market means that there are many buyers and sellers, so you can easily find someone to take the other side of your trade. An illiquid market means that there are few buyers and sellers, so it can be difficult to find someone to trade with and the price of the cryptocurrency may be more volatile.
read morePosts
Cryptofeed and XChange
Cryptofeed (Python) and XChange (Java) are both libraries that provide access to cryptocurrency exchange data. However, there are some key differences between the two projects.
Cryptofeed is more mature and has a wider range of supported exchanges. Cryptofeed currently supports over 40 exchanges, while XChange only supports a handful. Cryptofeed also has a more comprehensive set of features, including support for websockets, book validation, and multiple data formats. XChange is more lightweight and easier to use.
read morePosts
Crypto Market Makers
Here are some of the bigger crypto market makers:
Genesis Global Trading: Genesis Global Trading is a leading digital asset market maker, providing liquidity to institutions and professional traders around the world. BitMEX: BitMEX is a cryptocurrency exchange that offers margin trading and other derivatives products. It is one of the largest cryptocurrency exchanges in terms of trading volume. Binance: Binance is another large cryptocurrency exchange that offers a variety of trading products, including spot trading, margin trading, and futures trading.
read morePosts
Crypto - diy?
To create your own cryptocurrency, you will need to:
Create a blockchain. This is the underlying technology that will support your cryptocurrency. There are many different blockchain platforms available, such as Ethereum, Bitcoin, and EOS. Design your cryptocurrency. This includes deciding on the name, symbol, total supply, and distribution method. You will also need to create a mining algorithm. Create a wallet. This is where your cryptocurrency will be stored. There are many different wallets available, both hardware and software.
read morePosts
Crypto - why?
The point of cryptocurrency is to provide a decentralized, secure, and efficient way to transfer value. Cryptocurrencies are not issued by any central authority, such as a government or bank, and they are not backed by any physical asset. Instead, they are created and maintained by a network of computers that are running a special software program. This software program is designed to verify and record cryptocurrency transactions, and to prevent fraud.
read moreTag: ETrading
Posts
Binance troubles
Binance is being sued by a number of regulatory agencies and individuals for a variety of reasons, including:
Operating an unregistered securities exchange: The Securities and Exchange Commission (SEC) sued Binance in June 2023 for operating an unregistered securities exchange. The SEC alleged that Binance allowed US residents to trade unregistered securities, including tokens that were offered through initial coin offerings (ICOs). Market manipulation: The Commodity Futures Trading Commission (CFTC) also sued Binance in June 2023 for market manipulation.
read morePosts
Crypto liquidity
The liquidity of crypto markets is a measure of how easily you can buy or sell a cryptocurrency without affecting its price. A liquid market means that there are many buyers and sellers, so you can easily find someone to take the other side of your trade. An illiquid market means that there are few buyers and sellers, so it can be difficult to find someone to trade with and the price of the cryptocurrency may be more volatile.
read morePosts
Cryptofeed and XChange
Cryptofeed (Python) and XChange (Java) are both libraries that provide access to cryptocurrency exchange data. However, there are some key differences between the two projects.
Cryptofeed is more mature and has a wider range of supported exchanges. Cryptofeed currently supports over 40 exchanges, while XChange only supports a handful. Cryptofeed also has a more comprehensive set of features, including support for websockets, book validation, and multiple data formats. XChange is more lightweight and easier to use.
read morePosts
Crypto Market Makers
Here are some of the bigger crypto market makers:
Genesis Global Trading: Genesis Global Trading is a leading digital asset market maker, providing liquidity to institutions and professional traders around the world. BitMEX: BitMEX is a cryptocurrency exchange that offers margin trading and other derivatives products. It is one of the largest cryptocurrency exchanges in terms of trading volume. Binance: Binance is another large cryptocurrency exchange that offers a variety of trading products, including spot trading, margin trading, and futures trading.
read morePosts
Chronicle Queue and Aeron
Chronicle Queue and Aeron are both high-performance messaging systems, but they have different strengths and weaknesses.
Chronicle Queue is designed for low-latency, high-throughput messaging within a single machine or cluster. It stores messages in memory-mapped files, which can achieve very low latency (<1 microsecond) for messages that are sent and received on the same machine. Because the queue is file-backed, messages are persisted and can be recovered in the event of a crash.
read morePosts
Crypto - diy?
To create your own cryptocurrency, you will need to:
Create a blockchain. This is the underlying technology that will support your cryptocurrency. There are many different blockchain platforms available, such as Ethereum, Bitcoin, and EOS. Design your cryptocurrency. This includes deciding on the name, symbol, total supply, and distribution method. You will also need to create a mining algorithm. Create a wallet. This is where your cryptocurrency will be stored. There are many different wallets available, both hardware and software.
read morePosts
Predict the stock market
The premise was simple. Use “big” data analytics and machine learning models to predict the movement of stock prices. However, we had really “dirty” data and our Data Scientists were struggling to separate the noise from the signals. We spent a lot of time cleaning the data and introducing good old principles like “how can I run the model somewhere other than a laptop?”. This was a true startup, a bunch of people in a room trying to get stuff working.
read morePosts
Crypto - why?
The point of cryptocurrency is to provide a decentralized, secure, and efficient way to transfer value. Cryptocurrencies are not issued by any central authority, such as a government or bank, and they are not backed by any physical asset. Instead, they are created and maintained by a network of computers that are running a special software program. This software program is designed to verify and record cryptocurrency transactions, and to prevent fraud.
read morePosts
Cash Equities: Order Management System
Built and maintained a client and market side booking service, off order book trade reporting engine and trade manager/repository
Like most banks, this one suffered from the not invented here syndrome. They had decided to pretty much reimplement the core libraries to optimize for Zero Garbage Collection, low latency and high throughput. Unfortunately they were not optimized for large development teams and maintainability.
I helped the team debug issues and introduce new functionality using a tech stack that consisted of Java, Groovy, Spring, FIX, JUnit, MSSQL and JRebel.
read morePosts
Iceberg and VWAP
Developed an eTrading platform routing client FIX flow to the firm’s Algorithmic Trading platform.
Used profiling/debugging tools to resolve critical issues around lost trade messages.
Java, YourKit, Swing, Spring, Tibco EMS, FIX
read morePosts
Blockchain - why?
The point of blockchain is to provide a secure and transparent way to store and track data. Blockchain is a distributed ledger technology that uses cryptography to secure and verify transactions. This means that data stored on the blockchain cannot be tampered with or altered without the consent of the network.
Here are some of the potential benefits of blockchain:
Security: blockchain data is hard to tamper with, because each block is cryptographically linked to its predecessor.
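That tamper-resistance comes from hashing: each block records the hash of the block before it, so altering any earlier record breaks the chain. A minimal, illustrative sketch of just the chaining idea (not a real cryptocurrency):

```python
import hashlib
import json

def block_hash(block):
    # Deterministic hash of a block's contents
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def add_block(chain, data):
    # Each new block stores the hash of its predecessor
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"data": data, "prev": prev})
    return chain

def verify(chain):
    # Any edit to an earlier block invalidates every later link
    for i in range(1, len(chain)):
        if chain[i]["prev"] != block_hash(chain[i - 1]):
            return False
    return True
```

Tampering with any historical entry makes `verify` fail, which is the property the excerpt above describes.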
read moreTag: Bash
Posts
Beyond Bash: Exploring Modern Rust-based Command-Line Utilities
Rust has emerged as a powerhouse for building robust, lightning-fast, and secure software. Its influence extends beyond web applications and systems programming—it’s also gaining traction in the realm of command-line utilities.
Here’s a curated list of Rust-powered command-line tools that can streamline your workflow, enhance productivity, and make your life in the terminal a breeze:
Text Manipulation
bat: A “cat” clone with wings, featuring syntax highlighting, Git integration, and automatic paging for seamless viewing of text files.
more detailsTag: Mlops
Posts
Machine Learning Ops (MLOps)
MLOps stands for Machine Learning Operations. It is a set of practices that combines machine learning, DevOps, and IT operations to automate the end-to-end machine learning lifecycle, from data preparation to model deployment and monitoring.
The goal of MLOps is to make it easier to deploy and maintain machine learning models in production, while ensuring that they are reliable and efficient. MLOps can help to improve the quality of machine learning models, reduce the time it takes to get them into production, and make it easier to scale machine learning applications.
read morePosts
MLOps with Kubeflow
Kubeflow is an open-source platform for machine learning and MLOps on Kubernetes. It provides a set of tools and components that make it easy to deploy, manage, and scale machine learning workflows on Kubernetes.
Kubeflow includes a variety of components, including:
Notebooks: A Jupyter notebook service that allows data scientists to develop and experiment with machine learning models.
Pipelines: A tool for building and deploying machine learning pipelines.
Experimentation: A tool for tracking and managing machine learning experiments.
read moreTag: Bigquery
Posts
BigQuery ML Example
Here is an example of how to use BigQuery ML on a public dataset to create a logistic regression model to predict whether a user will click on an ad:
# BigQuery ML models are trained with SQL; the Python client just runs the query.
# Dataset, table and column names follow the original example and may need adjusting.
from google.cloud import bigquery

client = bigquery.Client()
client.query("""
CREATE OR REPLACE MODEL `my_dataset.my_model`
OPTIONS (model_type = 'logistic_reg', input_label_cols = ['churn']) AS
SELECT tenure, contract, monthly_charges, churn
FROM `bigquery-public-data.samples.churn`
""").result()
read morePosts
BigQuery ML and Vertex AI Generative AI
BigQuery ML and Vertex AI Generative AI (GenAI) are both machine learning (ML) services that can be used to build and deploy ML models. However, there are some key differences between the two services.
BigQuery ML: BigQuery ML is a fully managed ML service that allows you to build and deploy ML models without having to manage any infrastructure. BigQuery ML uses the same machine learning algorithms as Vertex AI, but it does not offer the same level of flexibility or control.
read moreTag: Low Latency
Posts
Chronicle Queue and Aeron
Chronicle Queue and Aeron are both high-performance messaging systems, but they have different strengths and weaknesses.
Chronicle Queue is designed for low-latency, high-throughput messaging within a single machine or cluster. It appends messages to memory-mapped files shared between processes, which can achieve very low latency (<1 microsecond) for messages that are sent and received on the same machine. Because the queue is backed by files, messages are persisted and can be recovered in the event of a crash.
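One ingredient of this design — persisting length-prefixed records to a file and reading them back through a memory mapping — can be sketched in a few lines. This is a toy illustration of the idea, not Chronicle Queue's actual on-disk format:

```python
import mmap
import os
import struct

class TinyLog:
    """Toy append-only log: each record is a 4-byte little-endian length prefix plus payload."""

    def __init__(self, path):
        self.path = path
        open(path, "ab").close()  # ensure the file exists

    def append(self, payload: bytes):
        with open(self.path, "ab") as f:
            f.write(struct.pack("<I", len(payload)))  # length prefix
            f.write(payload)

    def read_all(self):
        # Read records back through a memory mapping of the file
        if os.path.getsize(self.path) == 0:
            return []
        out = []
        with open(self.path, "rb") as f:
            with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
                pos = 0
                while pos < len(mm):
                    (n,) = struct.unpack_from("<I", mm, pos)
                    pos += 4
                    out.append(bytes(mm[pos:pos + n]))
                    pos += n
        return out
```

Because the data lives in a file rather than process memory, a reader can recover every record after a crash, which is the persistence property described above.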
read moreTag: Kubeflow
Posts
MLOps with Kubeflow
Kubeflow is an open-source platform for machine learning and MLOps on Kubernetes. It provides a set of tools and components that make it easy to deploy, manage, and scale machine learning workflows on Kubernetes.
Kubeflow includes a variety of components, including:
Notebooks: A Jupyter notebook service that allows data scientists to develop and experiment with machine learning models.
Pipelines: A tool for building and deploying machine learning pipelines.
Experimentation: A tool for tracking and managing machine learning experiments.
read moreTag: Iac
Posts
Reverse engineering an existing GCP project with terraformer
It can be tough to reverse engineer an existing project that has never used Terraform. Terraformer can look at an existing project and generate the corresponding Terraform code for you. I tried it out on an existing legacy project which used Google Cloud Storage, BigQuery and various service accounts. The setup was a little tricky so I put together a script to simplify things. The script assumes you have gcloud set up (or a service account key/impersonation) and you may need to adjust the --resources parameter.
read morePosts
Terraform init in the real world
Rather than fully configuring your backend in a backend.tf file:

terraform {
  backend "gcs" {
    bucket = "my-bucket-123"
    prefix = "terraform/state"
  }
}

I prefer to use the command line in order to avoid polluting the code with any environment-specific names:

terraform init \
  -backend-config="bucket=my-bucket-123" \
  -backend-config="prefix=terraform/state"
read moreTag: Technical-Debt
Posts
Reverse engineering an existing GCP project with terraformer
It can be tough to reverse engineer an existing project that has never used Terraform. Terraformer can look at an existing project and generate the corresponding Terraform code for you. I tried it out on an existing legacy project which used Google Cloud Storage, BigQuery and various service accounts. The setup was a little tricky so I put together a script to simplify things. The script assumes you have gcloud set up (or a service account key/impersonation) and you may need to adjust the --resources parameter.
read morePosts
when technical debt becomes just debt
I’ve always hated the phrase “technical debt” as it can lead to items being banished to a backlog that are never addressed. For example, Knight Capital recently blamed a “technology issue” for a $440 million trading loss. Nanex speculate that this may have been due to someone inadvertently testing in production. Technical debt is really just debt that will be repaid in one way or another.
read moreTag: Terraform
Posts
Reverse engineering an existing GCP project with terraformer
It can be tough to reverse engineer an existing project that has never used Terraform. Terraformer can look at an existing project and generate the corresponding Terraform code for you. I tried it out on an existing legacy project which used Google Cloud Storage, BigQuery and various service accounts. The setup was a little tricky so I put together a script to simplify things. The script assumes you have gcloud set up (or a service account key/impersonation) and you may need to adjust the --resources parameter.
read morePosts
Terraform Cloud Development Kit
Terraform’s Cloud Development Kit (CDK) lets you use other languages to define your cloud infrastructure.
https://github.com/hashicorp/terraform-cdk/blob/master/examples/
read morePosts
Cloud Billing Budget API in beta
You can finally set budgets via the API in GCP. This is a huge relief to all those org admins out there who have had to do this manually.
AND, hold on to your hats, there’s terraform support as well! Looks like Christmas came late….
data "google_billing_account" "account" {
  provider        = google-beta
  billing_account = "000000-0000000-0000000-000000"
}

resource "google_billing_budget" "budget" {
  provider        = google-beta
  billing_account = data.google_billing_account.account.id
  display_name    = "Example Billing Budget"

  amount {
    specified_amount {
      currency_code = "USD"
      units         = "100000"
    }
  }

  threshold_rules {
    threshold_percent = 0.
read morePosts
Managing GCP projects with Terraform
An invaluable guide on how to start managing GCP projects with Terraform. I wish I’d found this a year ago.
read morePosts
Terraform init in the real world
Rather than fully configuring your backend in a backend.tf file:

terraform {
  backend "gcs" {
    bucket = "my-bucket-123"
    prefix = "terraform/state"
  }
}

I prefer to use the command line in order to avoid polluting the code with any environment-specific names:

terraform init \
  -backend-config="bucket=my-bucket-123" \
  -backend-config="prefix=terraform/state"
read morePosts
Opinionated Google Cloud Platform projects
I’m glad Google are finally starting to embrace Terraform by creating their own modules. Version 0.1.0 of the project-factory looks really promising.
read moreTag: Chromedriver
Posts
Raspberry Pi/Raspbian - chromium/chromedriver crash after upgrade to 99.0.4844.51
Upgraded to chromedriver 99.0.4844.51 on Raspbian(bullseye) and seeing this in your chromedriver.log?
[0312/111354.689372:ERROR:egl_util.cc(74)] Failed to load GLES library: /usr/lib/chromium-browser/libGLESv2.so: /usr/lib/chromium-browser/libGLESv2.so: cannot open shared object file: No such file or directory
[0312/111354.709636:ERROR:viz_main_impl.cc(188)] Exiting GPU process due to errors during initialization
[0312/111354.735541:ERROR:gpu_init.cc(454)] Passthrough is not supported, GL is disabled, ANGLE is
Add "--disable-gpu" as an option when setting up the browser, e.g. for selenium/java:
ChromeOptions options = new ChromeOptions();
options.addArguments("--disable-gpu");
It looks like the behaviour has changed as this “shouldn’t” be required.
read moreTag: Debug
Posts
Raspberry Pi/Raspbian - chromium/chromedriver crash after upgrade to 99.0.4844.51
Upgraded to chromedriver 99.0.4844.51 on Raspbian(bullseye) and seeing this in your chromedriver.log?
[0312/111354.689372:ERROR:egl_util.cc(74)] Failed to load GLES library: /usr/lib/chromium-browser/libGLESv2.so: /usr/lib/chromium-browser/libGLESv2.so: cannot open shared object file: No such file or directory
[0312/111354.709636:ERROR:viz_main_impl.cc(188)] Exiting GPU process due to errors during initialization
[0312/111354.735541:ERROR:gpu_init.cc(454)] Passthrough is not supported, GL is disabled, ANGLE is
Add "--disable-gpu" as an option when setting up the browser, e.g. for selenium/java:
ChromeOptions options = new ChromeOptions();
options.addArguments("--disable-gpu");
It looks like the behaviour has changed as this “shouldn’t” be required.
read moreTag: Raspbian
Posts
Raspberry Pi/Raspbian - chromium/chromedriver crash after upgrade to 99.0.4844.51
Upgraded to chromedriver 99.0.4844.51 on Raspbian(bullseye) and seeing this in your chromedriver.log?
[0312/111354.689372:ERROR:egl_util.cc(74)] Failed to load GLES library: /usr/lib/chromium-browser/libGLESv2.so: /usr/lib/chromium-browser/libGLESv2.so: cannot open shared object file: No such file or directory
[0312/111354.709636:ERROR:viz_main_impl.cc(188)] Exiting GPU process due to errors during initialization
[0312/111354.735541:ERROR:gpu_init.cc(454)] Passthrough is not supported, GL is disabled, ANGLE is
Add "--disable-gpu" as an option when setting up the browser, e.g. for selenium/java:
ChromeOptions options = new ChromeOptions();
options.addArguments("--disable-gpu");
It looks like the behaviour has changed as this “shouldn’t” be required.
read moreTag: Apache-Beam
Posts
dataflow real time + aggregate
A great way to split up your pipeline based on the urgency of results aggregate-data-with-dataflow
read morePosts
Taming the stragglers in Google Cloud Dataflow
I’m currently benchmarking Flink against Google Cloud Dataflow using the same Apache Beam pipeline for quantitative analytics. One observation I’ve seen with Flink is the tail latency associated with some shards. Google Cloud Dataflow can optimise away stragglers in large jobs using “Dynamic Workload Rebalancing”. As far as I know, Flink is currently unable to perform similar optimisations.
read moreTag: Gcloud
Posts
Google Cloud IAM Madness
After the recent GCP outage related to IAM, I found some odd behaviour with gsutil/gcloud. A script that had faithfully run for many months stopped working with:
ServiceException: 401 Anonymous caller does not have storage.buckets.list access to project xxxx
I tried recreating the service account key used for the operation with no luck. To fix the problem, I had to create a new bucket!
gsutil mb -b on -l us-east1 gs://my-awesome-bucket123ed321/
read moreTag: Iam
Posts
Google Cloud IAM Madness
After the recent GCP outage related to IAM, I found some odd behaviour with gsutil/gcloud. A script that had faithfully run for many months stopped working with:
ServiceException: 401 Anonymous caller does not have storage.buckets.list access to project xxxx
I tried recreating the service account key used for the operation with no luck. To fix the problem, I had to create a new bucket!
gsutil mb -b on -l us-east1 gs://my-awesome-bucket123ed321/
read moreTag: Flink
Posts
Flink Kubernetes operators
How I wish these operators had existed a few years ago when I was setting up Flink…
https://github.com/GoogleCloudPlatform/flink-on-k8s-operator
https://www.ververica.com/blog/google-cloud-platforms-flink-operator-for-kubernetes
read morePosts
Running Flink in Production
This is a great watch for those beginning their journey with Flink.
read morePosts
Managing Flink Jobs
The DA Platform is a huge step forward for running Flink at scale. I was lucky enough to see a demo and was really impressed. Far more advanced than what can be achieved with Dataflow at the moment.
read morePosts
Taming the stragglers in Google Cloud Dataflow
I’m currently benchmarking Flink against Google Cloud Dataflow using the same Apache Beam pipeline for quantitative analytics. One observation I’ve seen with Flink is the tail latency associated with some shards. Google Cloud Dataflow can optimise away stragglers in large jobs using “Dynamic Workload Rebalancing”. As far as I know, Flink is currently unable to perform similar optimisations.
read morePosts
What are the differences between Apache Beam and Apache Flink?
Apache Beam and Apache Flink are both distributed computing frameworks for processing large amounts of data in parallel, but they have some fundamental differences in their design and functionality.
Apache Beam is a unified programming model for batch and streaming data processing, which provides a high-level API that allows developers to write data processing pipelines that can run on various execution engines, including Apache Flink, Apache Spark, and Google Cloud Dataflow.
read moreTag: K8s
Posts
Flink Kubernetes operators
How I wish these operators had existed a few years ago when I was setting up Flink…
https://github.com/GoogleCloudPlatform/flink-on-k8s-operator
https://www.ververica.com/blog/google-cloud-platforms-flink-operator-for-kubernetes
read moreTag: Kubernetes
Posts
Flink Kubernetes operators
How I wish these operators had existed a few years ago when I was setting up Flink…
https://github.com/GoogleCloudPlatform/flink-on-k8s-operator
https://www.ververica.com/blog/google-cloud-platforms-flink-operator-for-kubernetes
read moreTag: Streaming
Posts
Flink Kubernetes operators
How I wish these operators had existed a few years ago when I was setting up Flink…
https://github.com/GoogleCloudPlatform/flink-on-k8s-operator
https://www.ververica.com/blog/google-cloud-platforms-flink-operator-for-kubernetes
read morePosts
Managing Flink Jobs
The DA Platform is a huge step forward for running Flink at scale. I was lucky enough to see a demo and was really impressed. Far more advanced than what can be achieved with Dataflow at the moment.
read morePosts
Taming the stragglers in Google Cloud Dataflow
I’m currently benchmarking Flink against Google Cloud Dataflow using the same Apache Beam pipeline for quantitative analytics. One observation I’ve seen with Flink is the tail latency associated with some shards. Google Cloud Dataflow can optimise away stragglers in large jobs using “Dynamic Workload Rebalancing”. As far as I know, Flink is currently unable to perform similar optimisations.
read morePosts
What are the differences between Apache Beam and Apache Flink?
Apache Beam and Apache Flink are both distributed computing frameworks for processing large amounts of data in parallel, but they have some fundamental differences in their design and functionality.
Apache Beam is a unified programming model for batch and streaming data processing, which provides a high-level API that allows developers to write data processing pipelines that can run on various execution engines, including Apache Flink, Apache Spark, and Google Cloud Dataflow.
read moreTag: Java Jvm
Posts
Running Flink in Production
This is a great watch for those beginning their journey with Flink.
read morePosts
Great article about weak references in java
http://weblogs.java.net/blog/2006/05/04/understanding-weak-references
read moreTag: Prod
Posts
Running Flink in Production
This is a great watch for those beginning their journey with Flink.
read moreTag: 10x
Posts
How to create an effective SRE culture
Here are some tips on how to create an effective SRE culture:
Start with the right mindset. SRE is a mindset that sees reliability as everyone’s responsibility, not just the responsibility of the SRE team. It is important to create a culture where everyone is empowered to take ownership of reliability and to make decisions that will improve the reliability of the systems they work on.
Embrace failure. Failure is inevitable, so it is important to create a culture where failure is seen as an opportunity to learn and improve.
read moreTag: Teams
Posts
How to create an effective SRE culture
Here are some tips on how to create an effective SRE culture:
Start with the right mindset. SRE is a mindset that sees reliability as everyone’s responsibility, not just the responsibility of the SRE team. It is important to create a culture where everyone is empowered to take ownership of reliability and to make decisions that will improve the reliability of the systems they work on.
Embrace failure. Failure is inevitable, so it is important to create a culture where failure is seen as an opportunity to learn and improve.
read morePosts
DevSecOps vs SRE
DevSecOps and SRE are two complementary approaches to ensuring the reliability and security of software systems.
DevSecOps is a practice that integrates security into the entire software development lifecycle (SDLC). This means that security is considered from the very beginning of the development process, and it is not an afterthought. DevSecOps teams work closely with development, operations, and security teams to ensure that security is built into the code from the start.
read morePosts
Creating high performance teams
Here are some tips on how to build high-performance teams:
Start with the right people. The first step to building a high-performing team is to recruit the right people. This means finding individuals who are talented, motivated, and have the skills and experience to be successful.
Set clear goals and expectations. Once you have the right people in place, it is important to set clear goals and expectations for the team.
read moreTag: Microservices
Posts
How do you deliver reliable, high-throughput, low-latency microservices
Here are some tips on how to deliver reliable, high-throughput, low-latency (micro)services:
Design your services for reliability. This means designing your services to be fault-tolerant, scalable, and resilient. You can do this by using techniques such as redundancy, load balancing, and caching.
Use the right tools and technologies. There are a number of tools and technologies that can help you to deliver reliable, high-throughput, low-latency microservices. These include messaging systems, load balancers, and caching solutions.
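Two of the ingredients above — bounded retries for fault tolerance and an in-process cache — can be sketched with the standard library alone. Function names are illustrative, not from any particular service:

```python
import functools
import time

def retry(attempts=3, base_delay=0.01):
    """Retry a flaky call with exponential backoff; re-raise after the last attempt."""
    def deco(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            for i in range(attempts):
                try:
                    return fn(*args, **kwargs)
                except Exception:
                    if i == attempts - 1:
                        raise
                    time.sleep(base_delay * (2 ** i))  # back off: 10ms, 20ms, ...
        return wrapper
    return deco

@functools.lru_cache(maxsize=1024)  # cache hot lookups in-process
def expensive_lookup(key):
    return key.upper()  # stand-in for a slow remote call

calls = {"n": 0}

@retry(attempts=3)
def flaky():
    # Succeeds on the third attempt, simulating a transient network failure
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient failure")
    return "ok"
```

In a real service the retry budget and cache size would be tuned against latency SLOs rather than hard-coded.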
read moreTag: Dataflow
Posts
Taming the stragglers in Google Cloud Dataflow
I’m currently bench-marking Flink against Google Cloud Dataflow using the same Apache Beam pipeline for quantitative analytics. One observation I’ve seen with Flink is the tail latency associated with some shards. Google Cloud Dataflow can optimise away stragglers in large jobs using “Dynamic Workload Rebalancing". As far as I know, Flink is currently unable to perform similar optimisations.
read moreTag: Sre
Posts
DevSecOps vs SRE
DevSecOps and SRE are two complementary approaches to ensuring the reliability and security of software systems.
DevSecOps is a practice that integrates security into the entire software development lifecycle (SDLC). This means that security is considered from the very beginning of the development process, and it is not an afterthought. DevSecOps teams work closely with development, operations, and security teams to ensure that security is built into the code from the start.
read moreTag: Debt
Posts
when technical debt becomes just debt
I’ve always hated the phrase “technical debt” as it can lead to items being banished to a backlog that are never addressed. For example, Knight Capital recently blamed a “technology issue” for a $440 million trading loss. Nanex speculate that this may have been due to someone inadvertently testing in production. Technical debt is really just debt that will be repaid in one way or another.
read moreTag: Blockchain
Posts
Blockchain - why?
The point of blockchain is to provide a secure and transparent way to store and track data. Blockchain is a distributed ledger technology that uses cryptography to secure and verify transactions. This means that data stored on the blockchain cannot be tampered with or altered without the consent of the network.
Here are some of the potential benefits of blockchain:
Security: blockchain data is hard to tamper with, because each block is cryptographically linked to its predecessor.
read moreTag: Android
Posts
GMail notifications not working after latest update on Android/Nexus 4
Settings -> Applications -> GMail, then Force stop and Clear data/cache fixed this for me :-)
read moreTag: Gmail
Posts
GMail notifications not working after latest update on Android/Nexus 4
Settings -> Applications -> GMail, then Force stop and Clear data/cache fixed this for me :-)
read moreTag: Notifications
Posts
GMail notifications not working after latest update on Android/Nexus 4
Settings -> Applications -> GMail, then Force stop and Clear data/cache fixed this for me :-)
read moreTag: Matching
Posts
Money in, money out
Design and development of a real time matching engine to provide insights into liquidity for funding forecasts and regulatory requirements.
Java, BDD/TDD, JUnit/TestNG, Maven, Coherence, Spring Core, Spring Data, Spring Boot Micro Services, IBM MQ, Maven, Jira, TeamCity
read moreTag: Basel
Posts
Risk and tribal language / Counterparty Credit Risk
Whenever you start on a new project there’s always a certain amount of tribal language to decode. A colleague of mine kept talking about a system that “calculates IRC”. When I asked what IRC was, he didn’t know….
Here’s the best reference I found for developers looking to understand Counterparty Credit Risk.
read moreTag: Counterparty Credit Risk
Posts
Risk and tribal language / Counterparty Credit Risk
Whenever you start on a new project there’s always a certain amount of tribal language to decode. A colleague of mine kept talking about a system that “calculates IRC”. When I asked what IRC was, he didn’t know….
Here’s the best reference I found for developers looking to understand Counterparty Credit Risk.
read moreTag: Cva
Posts
Risk and tribal language / Counterparty Credit Risk
Whenever you start on a new project there’s always a certain amount of tribal language to decode. A colleague of mine kept talking about a system that “calculates IRC”. When I asked what IRC was, he didn’t know….
Here’s the best reference I found for developers looking to understand Counterparty Credit Risk.
read moreTag: Finance
Posts
Risk and tribal language / Counterparty Credit Risk
Whenever you start on a new project there’s always a certain amount of tribal language to decode. A colleague of mine kept talking about a system that “calculates IRC”. When I asked what IRC was, he didn’t know….
Here’s the best reference I found for developers looking to understand Counterparty Credit Risk.
read moreTag: Eclipse
Posts
Auto create builder pattern code in eclipse
Looking at legacy code with lots of unwieldy constructors. While mocking, I needed something to create all the boilerplate builder code and found this plugin, which works with Juno too. http://code.google.com/p/bpep/
read moreTag: Xml Java Mapping Jpa
Posts
Ignoring fields in JPA with xml mappings
You’ve got an XML file that you need to persist to the database and you don’t really want to use low-level parsing and JDBC.
Use trang to create an XML schema from the XML files.
Use xjc to create the Java objects.
Use JPA with XML mappings to persist, so there’s no need to touch the source generated by xjc.
For fields you don’t care about, map them as transient.
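For the last step, the JPA XML mapping might look something like this hypothetical orm.xml fragment (the entity and field names are made up for illustration; it sits inside the usual entity-mappings root):

```xml
<!-- Hypothetical fragment: class and field names are illustrative only -->
<entity class="com.example.Invoice">
  <attributes>
    <basic name="amount"/>
    <!-- field generated by xjc that we don't want persisted -->
    <transient name="rawXmlFragment"/>
  </attributes>
</entity>
```

Keeping the mapping in XML means the xjc-generated classes never need JPA annotations, so they can be regenerated freely.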
read moreTag: Java Eclipse
Posts
Faster sts startup
1/ Disable RSS feeds
Preferences -> Spring -> Dashboard (just delete the entries in the textbox)
2/ Disable Maven update on startup
Preferences -> Maven (untick “Download repository index updates on startup”)
read moreTag: Java Eclipse Log4j
Posts
log4j.properties ConversionPattern in Eclipse
log4j.appender.stdout.layout.ConversionPattern=%-5p %40.40c{2} - %m%n
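For context, here is a hypothetical complete log4j 1.x properties setup around that pattern, with the conversion characters annotated:

```properties
# Hypothetical full appender configuration (appender name is illustrative)
log4j.rootLogger=INFO, stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
# %-5p       = level, left-justified to 5 characters
# %40.40c{2} = last two segments of the logger category, padded/truncated to 40 chars
# %m%n       = the message followed by a newline
log4j.appender.stdout.layout.ConversionPattern=%-5p %40.40c{2} - %m%n
```

The fixed-width category column is what keeps the messages aligned in the Eclipse console.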
read moreTag: Java Gwt
Tag: Gwt
Posts
How to get GXT explorer running in Eclipse
Download the latest jars from http://www.sencha.com/products/extgwt/download/
Follow the “setup.txt” to create an Eclipse project.
Add all the folders in samples/**/src as source folders.
Expand the samples/examples.war into the “war” folder in your Eclipse dir.
Delete the old “gxt.jar” from WEB-INF/lib and replace it with gxt-2.2.3-gwt22.jar (it seems an old version is bundled with the samples.war).
Run As, Web app, explorer.html, cross fingers…
read moreTag: Gxt
Posts
How to get GXT explorer running in Eclipse
Download the latest jars from http://www.sencha.com/products/extgwt/download/
Follow the “setup.txt” to create an Eclipse project.
Add all the folders in samples/**/src as source folders.
Expand the samples/examples.war into the “war” folder in your Eclipse dir.
Delete the old “gxt.jar” from WEB-INF/lib and replace it with gxt-2.2.3-gwt22.jar (it seems an old version is bundled with the samples.war).
Run As, Web app, explorer.html, cross fingers…
read moreTag: Coherence
Posts
Coherence in the real world
This has really helped in the Coherence projects I’ve been working on. Nothing quite like real world experience. More resources here.
read moreTag: Irswap
Posts
Delta risk
QuantLib is a free and open-source software library for quantitative finance. It provides a wide range of functionality for pricing and risk-managing financial derivatives, including interest rate swaps.
To calculate the delta risk of an interest rate swap in Python using QuantLib, you can follow these steps:
Import the necessary QuantLib modules:

import QuantLib as ql

Create a QuantLib YieldTermStructure object to represent the current interest rate curve:
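The bump-and-reprice idea behind a delta calculation can also be shown without QuantLib, using a deliberately simplified flat-curve PV function. This is toy maths with illustrative numbers, a stand-in for QuantLib's pricing, not real market conventions:

```python
def swap_pv(fixed_rate, flat_rate, notional=1_000_000, years=5):
    # Receive-fixed swap PV under a flat annually-compounded discount curve (toy model)
    annuity = sum((1 + flat_rate) ** -t for t in range(1, years + 1))
    return notional * (fixed_rate - flat_rate) * annuity

def delta(flat_rate, bump=1e-4):
    # Central finite difference: bump the curve up/down by one basis point and reprice
    up = swap_pv(0.03, flat_rate + bump)
    down = swap_pv(0.03, flat_rate - bump)
    return (up - down) / (2 * bump)
```

A receive-fixed swap loses value as rates rise, so the delta comes out negative; QuantLib performs the same bump-and-reprice against a properly built YieldTermStructure.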
read more