Comparative Analysis of Open-Source AI Agent Libraries

Comparing Agentic, Axllm, Instructor, GPT-Researcher, and LangChain: An In-Depth Analysis of Open-Source AI Libraries

Date: July 8, 2024


In the rapidly evolving landscape of artificial intelligence (AI), open-source AI libraries have become crucial tools for developers and researchers. This report delves into a comparative analysis of five prominent open-source AI libraries: Agentic, AxLLM, Instructor, GPT-Researcher, and LangChain. Each of these libraries offers unique features and capabilities, catering to diverse use cases and development needs.

Agentic stands out for its flexible approach to agent creation and management, providing robust support for both synchronous and asynchronous execution. It excels in creating complex workflows by chaining multiple agents together, making it suitable for multi-step processes. AxLLM, on the other hand, offers a streamlined API for creating and managing AI agents, focusing on simplicity and ease of integration into existing applications.

Instructor differentiates itself through a focus on structured outputs and type validation, ensuring consistency and reliability in AI-generated responses. This makes it particularly valuable for applications requiring strict data formats. GPT-Researcher is designed specifically for autonomous research tasks, offering specialized agents for information gathering, analysis, and report writing. LangChain provides a comprehensive framework for building applications with language models, featuring advanced tools for memory management, prompt templates, and external data source integration.

The objective of this report is to provide an in-depth comparison of these libraries, evaluating their architectural designs, integration capabilities, performance, scalability, and community support. By examining their strengths and weaknesses, this analysis aims to guide developers in choosing the appropriate library for their specific project requirements.

Table of Contents

  • Feature Comparison and Use Cases
    • Agent Creation and Management
    • Language Model Integration
    • Task Execution and Workflow Management
    • Memory and Context Management
    • External Tool Integration
    • Use Cases
      • General-Purpose AI Applications
      • Specialized Research and Analysis
      • Complex AI Systems and Workflows
    • Comparison with Traditional Approaches
  • Architectural and Integration Insights
    • Framework Architectures
      • Agentic
      • Axllm
      • Instructor
    • Comparison with GPT-Researcher and LangChain
      • GPT-Researcher
      • LangChain
    • Integration Capabilities
      • API Compatibility
      • Extensibility
      • Ecosystem Integration
    • Performance and Scalability
      • Agentic
      • Axllm
      • Instructor
    • Use Case Suitability
      • Agentic
      • Axllm
      • Instructor
    • Development Experience
      • Agentic
      • Axllm
      • Instructor
    • Future Outlook and Trends
  • Community and Development Experience
    • Open Source Collaboration and Contribution
      • Agentic
      • Axllm
      • Instructor
    • Development Experience Comparison
      • Ease of Use and Learning Curve
      • Integration and Ecosystem Compatibility
      • Performance and Scalability
    • Community Support and Resources
      • Documentation and Learning Materials
    • Comparison with GPT-Researcher and LangChain

Feature Comparison and Use Cases

Agent Creation and Management

Agentic offers a flexible approach to agent creation, allowing developers to define custom agents with specific roles and capabilities. It supports both synchronous and asynchronous execution, making it suitable for various use cases. Agentic’s agents can be easily composed and chained together, enabling complex workflows.
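The chaining idea can be illustrated with a minimal TypeScript sketch. This is a hypothetical composition pattern, not Agentic's actual API: each "agent" is modeled as an async function from input to output, and a pipeline feeds each result into the next agent.

```typescript
// Hypothetical sketch of agent chaining (not Agentic's real API).
// An agent is modeled as an async function from input to output.
type Agent = (input: string) => Promise<string>;

// Compose agents into a pipeline: each step receives the previous output.
const chain = (...agents: Agent[]): Agent =>
  async (input) => {
    let result = input;
    for (const agent of agents) {
      result = await agent(result); // sequential: each step sees the last output
    }
    return result;
  };

// Two toy agents standing in for LLM-backed ones.
const summarize: Agent = async (text) => `summary(${text})`;
const translate: Agent = async (text) => `translated(${text})`;

const workflow = chain(summarize, translate);
```

Because each agent is just an async function, a real LLM call can be swapped in without changing the pipeline itself.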

AxLLM focuses on providing a simple API for creating and managing AI agents. It offers a more streamlined approach compared to Agentic, with built-in support for common agent types and tasks. AxLLM’s agents are designed to be easily integrated into existing applications and workflows.

Instructor takes a different approach by focusing on structured outputs and type validation. It allows developers to define the expected structure of AI responses, ensuring consistency and reliability in agent outputs. This feature is particularly useful for applications requiring strict data formats.
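The structured-output principle can be sketched as follows. Instructor itself uses Zod schemas for this; the hand-rolled validator below is a simplified stand-in that only illustrates the idea of rejecting responses that do not match an expected shape.

```typescript
// Simplified illustration of schema-validated output (Instructor uses Zod;
// this hand-rolled check is only a stand-in for the concept).
interface Contact {
  name: string;
  email: string;
}

function parseContact(raw: string): Contact {
  const data = JSON.parse(raw); // e.g. a model's JSON-mode response
  if (typeof data.name !== "string" || typeof data.email !== "string") {
    throw new Error("response does not match the Contact schema");
  }
  return data as Contact;
}
```

A malformed model response fails fast at the boundary instead of propagating bad data into the application.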

In contrast, GPT-Researcher is designed specifically for autonomous research tasks. It creates specialized agents for different research stages, such as information gathering, analysis, and report writing. This focused approach sets it apart from the more general-purpose libraries mentioned above.

LangChain provides a comprehensive framework for building applications with language models. It offers a wide range of tools and components for creating complex agent systems, including memory management, prompt templates, and integration with external data sources.

Language Model Integration

Agentic supports integration with various language models, including OpenAI’s GPT models and open-source alternatives. It provides a unified interface for interacting with different LLMs, allowing developers to switch between models easily.
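A unified model interface generally looks like the following sketch. The interface and class names here are hypothetical, not Agentic's actual abstraction: every provider implements the same `complete` signature, so switching models is a one-line change at the call site.

```typescript
// Hypothetical unified-interface sketch (not Agentic's real abstraction).
interface LLMProvider {
  complete(prompt: string): Promise<string>;
}

// A stand-in provider; a real one would call OpenAI, Anthropic, etc.
class EchoProvider implements LLMProvider {
  constructor(private label: string) {}
  async complete(prompt: string): Promise<string> {
    return `${this.label}:${prompt}`; // stand-in for a real API call
  }
}

function makeClient(provider: LLMProvider) {
  return (prompt: string) => provider.complete(prompt);
}

// Switching models is just passing a different provider:
const ask = makeClient(new EchoProvider("gpt"));
```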

AxLLM is designed to work primarily with OpenAI’s models but also supports other providers. It offers a simplified API for model interactions, abstracting away much of the complexity associated with direct LLM usage.

Instructor is model-agnostic and can work with any language model that supports function calling. This flexibility allows developers to use their preferred LLM while benefiting from Instructor’s structured output capabilities.

GPT-Researcher is optimized for use with OpenAI’s GPT-4, leveraging its advanced capabilities for research tasks. However, it can be adapted to work with other models with similar capabilities.

LangChain supports a wide range of language models and provides abstractions to easily switch between different providers. This flexibility makes it a popular choice for developers working with multiple LLMs.

Task Execution and Workflow Management

Agentic excels in creating complex workflows by allowing developers to chain multiple agents together. It supports both sequential and parallel execution of tasks, making it suitable for complex, multi-step processes.
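Parallel fan-out, the counterpart to sequential chaining, can be sketched with `Promise.all`. This is an illustrative pattern, not Agentic's actual API: independent tasks run concurrently and a reducer merges their outputs.

```typescript
// Hypothetical parallel fan-out (not Agentic's real API): independent
// tasks run concurrently, then a reducer merges their outputs.
type Task = (input: string) => Promise<string>;

const parallel = (tasks: Task[], merge: (outs: string[]) => string) =>
  async (input: string) =>
    merge(await Promise.all(tasks.map((t) => t(input)))); // order is preserved

// Toy tasks standing in for LLM-backed analyses.
const keywords: Task = async (text) => `keywords(${text})`;
const sentiment: Task = async (text) => `sentiment(${text})`;

const analyze = parallel([keywords, sentiment], (outs) => outs.join(" | "));
```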

AxLLM provides a more straightforward approach to task execution, focusing on single-agent tasks and simple workflows. It’s well-suited for applications that require quick integration of AI capabilities without complex agent interactions.

Instructor’s focus on structured outputs makes it particularly useful for tasks that require consistent, well-formatted data. It’s ideal for applications that need to process and validate AI-generated content before use.

GPT-Researcher implements a specialized workflow for research tasks, automating the entire process from query understanding to report generation. This makes it highly effective for its intended use case but less flexible for general-purpose applications.

LangChain offers robust tools for creating complex workflows, including its Agents and Tools framework. It allows for the creation of multi-step processes with branching logic and external tool integration.

Memory and Context Management

Agentic provides basic memory management capabilities, allowing agents to maintain context across multiple interactions. However, its memory features are not as advanced as some other frameworks.

AxLLM offers simple context management, primarily focused on maintaining conversation history for individual agents. It doesn’t provide advanced memory features out of the box.

Instructor doesn’t focus on memory management, as its primary purpose is structured output generation. Developers would need to implement their own memory solutions when using Instructor.

GPT-Researcher implements task-specific memory management, allowing it to maintain context throughout the research process. This is crucial for generating coherent and comprehensive research reports.

LangChain offers advanced memory management features, including various memory types (e.g., conversation buffer, summary memory) and the ability to integrate with external databases for long-term storage.
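The conversation-buffer idea can be sketched in a few lines. This is a simplified, hypothetical class in the spirit of LangChain's buffer memory (real class names and signatures vary by version): it stores turns and replays the most recent ones as prompt context.

```typescript
// Simplified sketch of a conversation-buffer memory (hypothetical; real
// LangChain memory classes differ). Stores turns, replays the last N.
class BufferMemory {
  private turns: { role: "user" | "assistant"; text: string }[] = [];

  add(role: "user" | "assistant", text: string): void {
    this.turns.push({ role, text });
  }

  // Render the most recent `limit` turns as a prompt prefix.
  context(limit = 4): string {
    return this.turns
      .slice(-limit)
      .map((t) => `${t.role}: ${t.text}`)
      .join("\n");
  }
}
```

Summary memory and database-backed long-term storage replace the in-memory array with a condensed or persisted representation, but expose the same "render context for the next prompt" operation.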

External Tool Integration

Agentic supports integration with external tools and APIs, allowing agents to access and manipulate external data sources. This feature enables the creation of more powerful and versatile agents.

AxLLM provides basic support for external tool integration, primarily through its API interface. However, it doesn’t offer as extensive a toolkit as some other frameworks.

Instructor doesn’t focus on external tool integration, as its primary purpose is structured output generation. Developers would need to implement their own integration solutions when using Instructor.

GPT-Researcher includes built-in integrations with web search engines and other research tools, enabling comprehensive information gathering and analysis.

LangChain excels in external tool integration, offering a wide range of pre-built tools and the ability to create custom tools. This makes it highly versatile for creating agents that can interact with various external systems and data sources.
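A custom-tool mechanism typically reduces to the pattern below. The registry and `dispatch` function are hypothetical (LangChain's real `Tool` interface differs): a tool pairs a name and description, which are shown to the model, with a `run` function, and the agent dispatches by name.

```typescript
// Hypothetical tool registry (LangChain's real Tool interface differs).
interface Tool {
  name: string;
  description: string; // shown to the model so it can pick a tool
  run: (input: string) => Promise<string>;
}

const tools = new Map<string, Tool>();

const register = (tool: Tool) => tools.set(tool.name, tool);

const dispatch = async (name: string, input: string): Promise<string> => {
  const tool = tools.get(name);
  if (!tool) throw new Error(`unknown tool: ${name}`);
  return tool.run(input);
};

register({
  name: "calculator",
  description: "Evaluates simple additions like '2+3'.",
  run: async (input) => {
    const [a, b] = input.split("+").map(Number);
    return String(a + b);
  },
});
```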

Use Cases

General-Purpose AI Applications

Agentic is well-suited for building complex, multi-agent systems that require flexible workflows and integration with various external tools. It’s ideal for applications such as:

  • Automated customer service systems with multiple specialized agents
  • AI-powered project management tools
  • Complex data analysis and reporting systems

AxLLM is best for quickly adding AI capabilities to existing applications or building simple AI-powered tools. Potential use cases include:

  • Chatbots for websites or messaging platforms
  • AI-assisted content generation tools
  • Simple question-answering systems

Instructor shines in applications that require structured, validated outputs from language models. It’s particularly useful for:

  • Form filling and data extraction from unstructured text
  • Generating structured data for database population
  • Creating consistent API responses in AI-powered services

Specialized Research and Analysis

GPT-Researcher is specifically designed for autonomous research tasks. Its use cases are focused but powerful:

  • Automated literature reviews and state-of-the-art analysis
  • Market research and competitor analysis
  • Trend analysis and forecasting in various domains

Complex AI Systems and Workflows

LangChain’s comprehensive feature set makes it suitable for a wide range of complex AI applications, including:

  • Advanced conversational AI systems with memory and reasoning capabilities
  • AI-powered document analysis and summarization tools
  • Multi-step data processing and analysis pipelines

Comparison with Traditional Approaches

Compared to traditional software development approaches, these libraries and frameworks offer significant advantages in building AI-powered applications:

  • Reduced development time and complexity
  • Easier integration of advanced AI capabilities
  • More flexible and adaptable systems

However, they also come with challenges, such as:

  • Potential inconsistency in AI-generated outputs
  • Need for careful prompt engineering and model fine-tuning
  • Ethical considerations in AI decision-making processes

In conclusion, while GPT-Researcher and LangChain offer more comprehensive solutions for complex AI systems, libraries like Agentic, AxLLM, and Instructor provide valuable tools for specific use cases and development approaches. The choice between these options depends on the specific requirements of the project, the desired level of control over the AI system, and the developer’s familiarity with different frameworks.

Architectural and Integration Insights

Framework Architectures

Agentic

Agentic is designed as a lightweight TypeScript framework for building AI agents. Its architecture focuses on simplicity and flexibility, allowing developers to create custom agents with ease. Key architectural features include:

  • Modular design with composable components
  • Event-driven architecture for agent interactions
  • Support for multiple LLM backends (OpenAI, Anthropic, etc.)
  • Built-in memory management and context handling

Agentic’s integration approach is minimalistic, requiring only a few lines of code to set up and run an agent. This makes it particularly suitable for rapid prototyping and small to medium-scale projects.

Axllm

Axllm takes a more comprehensive approach, offering a full-stack framework for building AI-powered applications. Its architecture is characterized by:

  • Unified API for multiple LLMs and vector databases
  • Built-in caching and optimization mechanisms
  • Extensible plugin system for custom functionalities
  • Robust error handling and logging capabilities

Axllm’s integration strategy focuses on providing a seamless experience across different LLMs and databases, making it easier for developers to switch between providers or use multiple services simultaneously.

Instructor

Instructor adopts a unique architecture centered around structured outputs. Its key architectural elements include:

  • Type-safe output parsing using Zod schemas
  • Function calling capabilities for complex interactions
  • Integration with popular TypeScript frameworks (Next.js, Express, etc.)
  • Support for streaming responses and partial results

Instructor’s integration approach emphasizes type safety and structured data, making it particularly suitable for projects requiring strict data validation and complex output structures.

Comparison with GPT-Researcher and LangChain

GPT-Researcher

GPT-Researcher is an autonomous AI agent for comprehensive online research. Its architecture differs significantly from the three libraries mentioned above:

  • Focused on autonomous research tasks
  • Incorporates web scraping and information synthesis
  • Uses a multi-agent system for task decomposition
  • Includes a report generation module

While GPT-Researcher is highly specialized, it shares some similarities with Agentic in terms of agent-based design. However, its integration is more complex due to its specific research-oriented features.

LangChain

LangChain is a comprehensive framework for developing applications with LLMs. Its architecture is more extensive and feature-rich compared to the other libraries:

  • Modular components for various LLM tasks (prompts, chains, agents, etc.)
  • Extensive integrations with external tools and services
  • Support for advanced memory and retrieval systems
  • Built-in evaluation and debugging tools

LangChain’s integration approach is more holistic, providing a wide range of tools and components that can be combined to create complex AI applications. This makes it more suitable for large-scale projects but potentially more complex for simple use cases.

Integration Capabilities

API Compatibility

  • Agentic: Supports multiple LLM providers through a unified API, similar to LangChain but with a simpler interface.
  • Axllm: Offers a unified API for LLMs and vector databases, providing seamless integration across different services.
  • Instructor: Focuses on OpenAI’s API but provides robust type-safe integrations with popular TypeScript frameworks.

Extensibility

  • Agentic: Highly extensible through its modular design, allowing easy addition of custom components.
  • Axllm: Provides a plugin system for extending functionality, similar to LangChain’s approach but with a focus on full-stack applications.
  • Instructor: Extensibility is centered around output parsing and function calling, making it highly adaptable for structured data scenarios.

Ecosystem Integration

  • Agentic: Lightweight integration with existing JavaScript/TypeScript ecosystems, suitable for projects already using popular frameworks.
  • Axllm: Comprehensive integration capabilities, including database connectors and built-in caching mechanisms.
  • Instructor: Seamless integration with TypeScript projects, particularly those using Zod for schema validation.

Performance and Scalability

Agentic

  • Lightweight design allows for efficient resource usage
  • Suitable for small to medium-scale applications
  • May require additional optimizations for large-scale deployments

Axllm

  • Built-in caching and optimization mechanisms enhance performance
  • Designed to handle large-scale applications with multiple LLMs and databases
  • Potential for higher resource usage due to comprehensive feature set

Instructor

  • Focus on type-safe parsing may introduce slight overhead
  • Efficient for applications requiring structured outputs
  • Streaming capabilities allow for handling large responses efficiently

Use Case Suitability

Agentic

  • Ideal for rapid prototyping of AI agents
  • Well-suited for chatbots and conversational AI applications
  • Effective for projects requiring custom agent behaviors

Axllm

  • Excellent for full-stack AI applications
  • Suitable for projects utilizing multiple LLMs and vector databases
  • Effective for applications requiring robust caching and optimization

Instructor

  • Perfect for applications requiring strict type safety and structured outputs
  • Well-suited for complex function calling scenarios
  • Ideal for integration with existing TypeScript projects

Development Experience

Agentic

  • Simple API with a short learning curve
  • Extensive documentation and examples available
  • Active community support and regular updates

Axllm

  • Comprehensive documentation with detailed guides
  • Steeper learning curve due to extensive feature set
  • Growing community and ecosystem

Instructor

  • Strong focus on developer experience with TypeScript integration
  • Excellent type safety features reduce runtime errors
  • Comprehensive documentation with practical examples

Future Outlook and Trends

As of July 2024, the AI development landscape continues to evolve rapidly. The three libraries discussed here represent different approaches to AI integration, each with its strengths:

  • Agentic is likely to gain traction in the rapid prototyping and small-scale AI agent development space.
  • Axllm is positioned to become a major player in full-stack AI application development, potentially challenging LangChain’s dominance.
  • Instructor is set to become the go-to solution for TypeScript developers requiring structured outputs and type safety in AI applications.

As the field progresses, we can expect these libraries to expand their features and integration capabilities, potentially leading to convergence in some areas while maintaining their unique strengths.

Community and Development Experience

Open Source Collaboration and Contribution

Agentic

Agentic has garnered significant attention within the open-source community, boasting over 1,700 stars on GitHub as of July 2024. The project’s active development is evident through its frequent commits and releases. The community engagement is fostered through:

  • Regular issue discussions and pull requests
  • A dedicated Discord server for real-time collaboration
  • Comprehensive documentation and examples to aid developers

The project maintainers have established clear contribution guidelines, encouraging developers to participate in various aspects, from bug fixes to feature enhancements. This open approach has resulted in a diverse set of contributors, ranging from individual developers to representatives from tech companies.

Axllm

Axllm has a growing community, albeit smaller than Agentic’s. The project’s GitHub repository shows steady activity, with:

  • A modest but engaged group of contributors
  • Regular updates and version releases
  • An active issue tracker for bug reports and feature requests

The Axllm team has implemented a welcoming onboarding process for new contributors, including detailed setup instructions and a contributor’s guide. This approach has helped attract developers interested in working with large language models (LLMs) in a more accessible framework.

Instructor

Instructor has carved out a niche in the JavaScript ecosystem for LLM development. Its community is characterized by:

  • A focus on JavaScript and TypeScript developers
  • Active engagement on platforms like Stack Overflow and GitHub Discussions
  • Regular meetups and webinars organized by core contributors

The project’s documentation is particularly praised for its clarity and extensive examples, making it easier for newcomers to get started. This has contributed to a steady growth in the number of developers adopting Instructor for their LLM projects.

Development Experience Comparison

Ease of Use and Learning Curve

Agentic stands out for its intuitive API design, making it accessible to developers with varying levels of experience in AI and LLMs. Its modular architecture allows for easy customization and extension of functionalities. However, the breadth of features can be overwhelming for absolute beginners.

Axllm takes a more streamlined approach, focusing on simplicity and ease of integration. Its documentation includes step-by-step tutorials and interactive examples, reducing the learning curve for developers new to LLM development. This approach has made it popular among startups and small teams looking for quick implementation.

Instructor’s JavaScript-centric design caters specifically to web developers, offering a familiar environment for those already working with Node.js or browser-based applications. Its type-safe approach using TypeScript has been particularly appreciated by developers coming from strongly-typed languages.

Integration and Ecosystem Compatibility

Agentic excels in its broad ecosystem compatibility, offering integrations with popular AI services and tools. This includes seamless connections to:

  • OpenAI’s GPT models
  • Hugging Face’s model hub
  • Various vector databases for efficient data retrieval

Axllm focuses on providing a unified interface for different LLM providers, simplifying the process of switching between models or experimenting with different AI services. This approach has been well-received by researchers and developers who need to compare model performances.

Instructor’s tight integration with the JavaScript ecosystem sets it apart. It offers:

  • Native support for popular frameworks like React and Vue.js
  • Easy integration with Node.js backend services
  • Compatibility with serverless platforms like Vercel and Netlify

Performance and Scalability

Agentic

Agentic has demonstrated robust performance in handling complex AI workflows. Its architecture is designed to scale horizontally, allowing for distributed processing of large-scale LLM tasks. Benchmarks conducted by the community have shown:

  • Efficient handling of concurrent requests
  • Low latency in multi-step reasoning tasks
  • Effective memory management for long-running processes

However, some users have reported challenges in fine-tuning performance for very large datasets, indicating areas for potential improvement.

Axllm

Axllm’s focus on simplicity extends to its performance optimizations. The library includes:

  • Built-in caching mechanisms to reduce redundant API calls
  • Efficient token management to minimize costs when using commercial LLM services
  • Lightweight design that minimizes overhead in resource-constrained environments

These features have made Axllm a popular choice for edge computing and IoT applications involving LLMs.
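A response cache of the kind described above usually amounts to memoizing the completion call by prompt. The wrapper below is a hypothetical sketch, not Axllm's actual mechanism: repeated prompts are served from an in-memory map instead of triggering another API round-trip.

```typescript
// Hypothetical response cache (not Axllm's actual mechanism): memoize an
// async completion function by prompt to skip redundant API calls.
type Complete = (prompt: string) => Promise<string>;

function withCache(complete: Complete): { call: Complete; hits: () => number } {
  const cache = new Map<string, string>();
  let hits = 0;
  return {
    call: async (prompt) => {
      const cached = cache.get(prompt);
      if (cached !== undefined) {
        hits++; // served from cache, no API round-trip
        return cached;
      }
      const result = await complete(prompt);
      cache.set(prompt, result);
      return result;
    },
    hits: () => hits,
  };
}
```

A production cache would also bound its size and expire entries, but the cost-saving principle is the same.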

Instructor

Instructor leverages JavaScript’s asynchronous capabilities to provide excellent performance in I/O-bound tasks. Its design principles include:

  • Efficient streaming of LLM outputs for real-time applications
  • Optimized memory usage through careful management of model states
  • Support for WebWorkers to offload heavy computations in browser environments

These optimizations have resulted in Instructor being favored for building responsive, LLM-powered web applications.
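Streaming of LLM output maps naturally onto JavaScript async generators. The sketch below is illustrative only (a real client would wrap the provider's network stream rather than a local array): chunks are consumed with `for await`, so a UI can render each partial result as it arrives.

```typescript
// Illustrative streaming sketch; a real client wraps the provider's
// network stream rather than a local token array.
async function* fakeStream(tokens: string[]): AsyncGenerator<string> {
  for (const t of tokens) {
    yield t; // in a real client each chunk arrives over the network
  }
}

async function collect(stream: AsyncGenerator<string>): Promise<string> {
  let out = "";
  for await (const chunk of stream) {
    out += chunk; // a UI could render each partial result here
  }
  return out;
}
```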

Community Support and Resources

Documentation and Learning Materials

Agentic offers comprehensive documentation, including:

  • Detailed API references
  • Conceptual guides explaining core principles
  • A growing collection of tutorials and use cases

The project also maintains a blog with regular updates on new features and best practices, contributing to a rich learning ecosystem.

Axllm’s documentation is praised for its clarity and practical approach. It includes:

  • Interactive code examples that can be run directly in the browser
  • A series of video tutorials covering various aspects of the library
  • Regular workshops and webinars for hands-on learning

Instructor’s documentation stands out for its focus on real-world applications. It provides:

  • End-to-end project examples demonstrating common use cases
  • Detailed performance optimization guides
  • Integration tutorials for popular JavaScript frameworks

Comparison with GPT-Researcher and LangChain

When compared to GPT-Researcher and LangChain, Agentic, Axllm, and Instructor offer more specialized approaches:

  • Agentic provides a more flexible framework for building custom AI agents, similar to GPT-Researcher, but with a broader scope beyond research tasks.
  • Axllm focuses on simplifying LLM interactions, offering a more streamlined alternative to LangChain’s comprehensive but complex ecosystem.
  • Instructor’s JavaScript-first approach contrasts with LangChain’s Python-centric design, catering to a different developer demographic.

All three libraries emphasize ease of use and rapid development, addressing some of the complexity challenges associated with LangChain. However, they may lack some of the advanced features and extensive integrations that LangChain offers.

GPT-Researcher’s specialized focus on autonomous research tasks sets it apart from the more general-purpose design of Agentic, Axllm, and Instructor. While these libraries can be used to build similar research agents, they require more custom development to achieve the same level of task-specific functionality.

In terms of community size and maturity, LangChain still leads, but Agentic, Axllm, and Instructor are rapidly growing, each carving out its niche in the LLM development ecosystem.


The comparative analysis of Agentic, AxLLM, Instructor, GPT-Researcher, and LangChain reveals a diverse landscape of open-source AI libraries, each offering unique advantages tailored to different development needs. Agentic’s modular design and flexibility make it ideal for complex, multi-agent systems, while AxLLM’s streamlined approach simplifies the integration of AI capabilities into existing applications.

Instructor’s emphasis on structured outputs and type safety is particularly beneficial for applications requiring consistent data formats and reliable AI-generated content. GPT-Researcher’s specialized focus on autonomous research tasks positions it as a powerful tool for comprehensive research and analysis, automating the entire process from information gathering to report generation. LangChain, with its extensive feature set and advanced memory management, is well-suited for building sophisticated AI applications that require robust external tool integration and complex workflows.

In terms of community and development experience, all five libraries demonstrate active engagement and support, with varying levels of community involvement. Agentic and LangChain have established large and active communities, while AxLLM and Instructor are steadily growing, attracting developers with their ease of use and targeted functionalities. GPT-Researcher, though more niche, offers significant value for its intended use case.

Ultimately, the choice between these libraries depends on the specific requirements of the project, the desired level of control over the AI system, and the developer’s familiarity with different frameworks. This comparative analysis underscores the importance of selecting the right tool to leverage the full potential of AI in various applications.


Written By

Introducing LabelLens: Your Smart Guide to Healthier Food and Product Choices

Introducing LabelLens: Your Smart Guide to Healthier Food Choices

In today’s world of endless food options, making informed decisions about what we eat has never been more important – or more challenging. Enter LabelLens, an innovative web app designed to demystify food and product labels and empower consumers to take control of their wellness.

LabelLens is revolutionizing the way we shop for food by instantly decoding ingredient lists and nutritional information. With just a quick scan of a product label using your smartphone or device camera, LabelLens provides a clear, easy-to-understand breakdown of what’s really in your food or other product.

LabelLens - AI Product Label Analysis

Key Features:

  1. Instant Ingredient Analysis: Simply snap a photo of any food label ingredients or nutrition sections, and LabelLens will quickly identify and explain all ingredients it sees, highlighting any potential concerns.
  2. Wellness Insights: Get a recommended health rating and analysis for each product, helping you make smarter choices aligned with your dietary and wellness goals.
  3. Multilingual Support: LabelLens can recognize and translate food labels in multiple languages, making it an invaluable tool for travelers or when shopping for international products.
  4. Customizable Alerts: Set up personalized ingredient watchlists to easily avoid allergens or other ingredients you’re trying to cut back on. Additionally, you can also scan for healthy ingredients you want to focus on. 
  5. Save and Compare: Build a database of your scanned products, allowing you to easily track and compare different options over time.

The user-friendly interface, as shown in the images, makes navigating the app a breeze. Whether you’re viewing your saved products, checking your ingredient watchlist, or exploring subscription options, LabelLens keeps everything organized and accessible.

LabelLens Product Scan LabelLens Product Scan Results

LabelLens isn’t just for individuals – it’s a game-changer for families, health-conscious shoppers, and anyone with dietary restrictions or allergies. By providing clear, actionable information about the food we buy, LabelLens empowers users to make choices that align with their health goals and values.

Ready to transform your shopping experience? LabelLens offers flexible subscription plans to suit various needs, from casual users to nutrition enthusiasts. And for those curious to try it out, there’s even a free trial option to get you started.

In a world where what we eat impacts our health, energy, and overall well-being, LabelLens serves as your personal product label guide, always ready to help you navigate the complex world of food and product labels. It’s time to shop smarter, eat better, and take charge of your wellness with LabelLens.

Visit today to start your journey towards more informed, healthier food and product choices!

Full Flap Boogie

Touring the world with Location Manager for some Full Flap Boogie action in the Got Friends DoubleEnder.

Content Links:


Flight Recorder: Sky Dolly 

Music by Sonicviz & Buck Brown “Headed into Town”


The Power of Repetition and Interleaved Learning in MSFS

Today I want to talk about how repetition and interleaved learning can help you master the sim piloting skills of take offs and landings using MSFS.

Repetition is the act of doing something over and over again until it becomes automatic. Interleaved learning is the practice of switching between different topics or skills in a random or varied order. Both of these methods have been shown to improve retention and transfer of knowledge and skills in various domains, including aviation.

Why are repetition and interleaved learning important for take offs and landings?

Well, these are two of the most critical and challenging phases of flight, and they require a lot of coordination, precision, and situational awareness. They also vary depending on the type of aircraft, the weather conditions, the airport layout, and the traffic situation. Therefore, it is not enough to just learn how to do a take off or a landing once and then forget about it. You need to practice them frequently and in different scenarios to build your confidence and competence.

MSFS is a realistic and immersive flight simulator that allows you to fly anywhere in the world with any aircraft you want. You can also customize the weather, the time of day, the traffic, and the failures to create realistic and challenging situations. MSFS addons like Location Manager and Aircraft Manager provide features that let you save your favorite locations and aircraft settings for easy access.

For example, let’s say you want to practice take offs and landings at KLAX Los Angeles International Airport in California, USA. You can use the location manager to save this airport as one of your favorites, and it will automatically show you how many runways and parking spots are available, as well as the ILS frequencies if any. You can also use the aircraft manager to save your favorite aircraft types, livery, fuel load, weight and balance, etc.

KLAX ILS Training with Location Manager

Then, you can use the location manager toolbar in fly mode to quickly switch between different runways and parking spots without having to go back to the main menu. This way, you can practice take offs and landings from different directions and distances, with different wind speeds and directions, with different traffic patterns, etc. You can also use the aircraft manager weight and balance toolbar additions to change your aircraft settings on the fly, such as changing the fuel, passenger, or cargo load.

Changing Weight and Balance presets for quick aircraft reconfiguration

By doing this, you are applying repetition and interleaved learning principles to your simulation-based training. You are repeating the same skill (take off or landing) multiple times until it becomes second nature. You are also interleaving different variables (runway, parking spot, weather, time of day/night, position, distance, bearing, height, speed) to make your practice more varied and challenging. This will help you improve your memory, adaptability, and problem-solving skills. You can also completely randomise all these variables to really test your skills.
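To make the idea concrete, here's a small illustrative sketch of an interleaved practice schedule: the core skill repeats while the surrounding variables are randomly varied. The skills, runways, and wind conditions below are example values of my own, not data from Location Manager:

```javascript
// Illustrative sketch: generate an interleaved practice schedule.
// The skills, runways, and winds are example values, not addon data.
const skills = ["take off", "landing", "go around"];
const runways = ["KLAX 24L", "KLAX 25R", "KLAX 06R"];
const winds = ["calm", "10kt headwind", "8kt crosswind"];

// Pick a random element from an array.
function pick(arr) {
  return arr[Math.floor(Math.random() * arr.length)];
}

// Each drill repeats a core skill while interleaving the other variables,
// so no two drills in a session are likely to be identical.
function buildSession(drillCount) {
  const session = [];
  for (let i = 0; i < drillCount; i++) {
    session.push({
      skill: pick(skills),
      runway: pick(runways),
      wind: pick(winds),
    });
  }
  return session;
}

const session = buildSession(5);
session.forEach((d, i) =>
  console.log(`Drill ${i + 1}: ${d.skill} at ${d.runway}, ${d.wind}`)
);
```

In practice you'd do the same thing by hand with the Location Manager toolbar, switching runway and weather between each circuit rather than flying the same setup five times in a row.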

If you want to see how this works in action, check out this video where I demonstrate how to use the location manager and aircraft manager features in MSFS. I also show you some examples of how I practice take offs and landings at Bora Bora Airport in French Polynesia using these features, in addition to pointing out other features using Lukla (height AGL estimation) and KLAX (ILS training).

You can use it for all sorts of scenario based training:

  • Take offs
  • Take off emergency procedures
  • Landings
  • Go around
  • Landing emergency procedures
  • Varied landing approaches
  • ILS familiarisation and training
  • Whatever you can come up with!

To get the most out of Location Manager, watch the above video and also refer to the extensive notes in the Tips section: How to best use Location Manager

Future improvements to this process could involve things like:

  • Improved failure triggering in take off/landing (currently set via the failure menu before flight)
  • Traffic issues impacting the pattern sequence
  • ATC instructions
  • [insert here]

We'll see how things progress! I hope you enjoyed this blog post and learned something new.

Feel free to send comments and feedback via the Contact Form, I’d love to hear from you.

Until next time, happy flying!

From Flight Sim to Digital Twin: The Evolution of MSFS

I recently gave a talk about the past, present, and future of MSFS at Nerd Nite Tokyo, titled “From Flight Sim to Digital Twin: The Evolution of MSFS”.

There are a lot of things you could talk about when projecting the future of what MSFS could be, based on the progress made over 40+ years.

However, a lot of these things are already recognised, roadmapped, and under development, e.g. flight modelling, AI content generation, weather (generally aerial only), seasons, the SDK, AI traffic, ATC, and so on.

I chose to focus on one aspect that currently doesn’t seem to be on the radar and/or roadmap – or discussed much at all except for occasional lonely threads in the MSFS forums.

That focus aspect was water simulation, which I think is a key element in taking MSFS to the next level out to 2030 and beyond.

Realistic water simulation is critical for lots of reasons, two in particular:

  • Modelling the full behavior of natural systems
  • Providing realistic environments for navigation

Here’s one good use case:
ref: Queensland floods: Burketown residents warned of crocodile-infested waters ahead of expected peak | Queensland | The Guardian

What you can currently do in MSFS:

Ditto for the tides.

I gave two other use cases as examples which are even more critical:

  • Sea Level Rise
  • Simulating the global underwater environment

I’ll share the video link if it gets posted; otherwise I’ll post the slides instead.

It would be good to get more of a conversation going about the need for water simulation in MSFS to get some more focus and visibility.

Location Manager for MSFS – Development Overview

Location Manager for MSFS – The Challenge of Enhancing the UI/UX

Following on from my previous post about Aircraft Manager for MSFS – Development Overview, here is a short overview of how the development of Location Manager panned out. To recap the problem: MSFS is like an interactive earth simulator, enabled by flying complex mini-simulations in the form of aircraft from across the whole history of flight. The two methods currently available for managing locations in MSFS are shown below.

Inbuilt MSFS method 1 for location management

Inbuilt MSFS method 1 for location management

Inbuilt MSFS method 2 for location management

Inbuilt MSFS method 2 for location management

But there’s no way to save and/or favorite locations in the sim itself. While you can save locations outside MSFS in all sorts of ways (OneNote, LittleNavMap, Excel, etc) it’s not the most convenient of methods if you want to build up a log of world exploration discoveries for easy access within the simulator. This isn’t the same as loading a saved flight plan either. There are many use cases where you don’t actually want to use a flight plan, but just go to a location and start flying from there. So the intent here was to derive a solution to solve this problem.

Step 1 – Validate the Technical Approach

In development, it’s more efficient if you can leverage past knowledge and skills so you don’t have to reinvent the wheel every time. Having just completed Aircraft Manager for MSFS, I’d climbed the learning curve on both the MSFS UI framework and some Javascript solutions to integrate into it. So it seemed logical to ask: how could I apply that to the unique problem of location management?

First off, just as in the previous post, I needed to validate some technical issues before diving into the prototyping stage. This was a very different project from Aircraft Manager, as that relied mainly on the inbuilt aircraft database supplied by the sim. For Location Manager for MSFS, one of the big issues was that we needed to develop a robust way to save locations with associated user data via the inbuilt MSFS UI storage system. There were also unanswered questions about how robust this storage system is in terms of how much data can be safely stored, on both PC and Xbox. I have received some advice from one of the Asobo devs on this (thanks!) but still need a little more clarity, so I need to keep an eye on it.

In summary, some of the biggest technical issues to be validated were:

  • How to capture and save the location information from the MSFS UI
  • Where would be the best place to insert the UI to manage saved locations
  • How to replicate the map zoom functionality in MSFS for saved locations (this was tricky!)
  • How to efficiently store locations for easy retrieval, display, updates, and deletion

So I got to work, solving each of these unknowns in turn, or at least getting into the ballpark of “this looks doable in a performant manner, so the project’s not dead and I can keep going”. Like the previous project, there were a number of dead ends and moments of frustration that necessitated some creative thinking to spark the idea for another way to solve the problem. Some of these solutions came from gaining a deeper understanding of the MSFS UI framework, some from creatively applying workarounds. A workaround is a little different from a hack, by the way. A hack is typically a short-term fix, designed as a purely quick-and-dirty temporary measure while you search for a better, more reliable long-term solution to a problem. A workaround is a solution you have confidence is robust enough for long-term use, even though it’s not how you originally envisioned it working. Workarounds are always preferable to hacks, giving you better maintainability over the long term.
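To make the storage question concrete, here's a minimal sketch of the general pattern for persisting saved locations through a string-based key-value store, which is the shape of storage the MSFS UI framework exposes to Javascript. The `store` object and function names here are illustrative stand-ins of my own, not the actual Location Manager implementation or MSFS API:

```javascript
// Illustrative sketch: persisting saved locations through a string-only
// key-value store. The Map below is a stand-in, not the real MSFS API.
const store = new Map();

// Version the payload so the save format can evolve without
// breaking previously stored data.
function saveLocations(key, locations) {
  const payload = { version: 1, locations };
  store.set(key, JSON.stringify(payload));
}

function loadLocations(key) {
  const raw = store.get(key);
  if (!raw) return [];
  try {
    const payload = JSON.parse(raw);
    return payload.version === 1 ? payload.locations : [];
  } catch (e) {
    // Corrupt or legacy data: fail safe with an empty list.
    return [];
  }
}

// Example: a saved location with associated user data.
saveLocations("LM_LOCATIONS", [
  { name: "Bora Bora", lat: -16.4444, lon: -151.7375, notes: "AGL practice" },
]);
console.log(loadLocations("LM_LOCATIONS")[0].name); // prints "Bora Bora"
```

The open question about storage limits on PC and Xbox is exactly why the serialized payload needs to stay compact and why a versioned, fail-safe load path matters: you can't assume every read will return well-formed data.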

Step 2 – Validate the Prototype

After gaining sufficient confidence I had working solutions to the core technical issues, I could start constructing a full prototype incorporating these into a working demo. This lets you test & tweak the full workflow, and weed out other issues/bugs.

Early Prototype of the Map interface

Early Prototype of the location management interface

Location Manager for MSFS Free Demo

The next stage from this is to get a demo into people’s hands for wider feedback, while continuing to tweak workflows and solve issues/bugs as they appear. Similar to AM, I fired up a thread at the Flight Simulator forums to see if anyone was interested in trying it out. Have at it, all feedback is welcome: The Good, The Bad, and The Ugly. You can download the free demo lite version from the Location Manager for MSFS project page. Like the AM demo, it’s fully functional for casual use cases, with no obligation to upgrade. Enjoy!

Location Manager for MSFS Pro Release

Version 1 of LM Pro

Location Manager Pro V1.0 was released on 2023-02-14. It includes a toolbar widget for accessing LM from fly mode, so you can teleport to saved locations as well as add new ones. The release announcement on the MSFS Forums is here @ Aircraft Manager Pro + Location Manager Pro now available.

Start Anywhere in MSFS

It was quickly followed up with another solution to a long-standing request from MSFS users: the ability to start cold and dark from anywhere. You are currently limited to starting only from airports or in the air.

With V1.0.3 of Location Manager we introduce Start Anywhere, where you can teleport to saved locations in Fly mode and start on land, water, or air to begin your flight experience.

More details in the video below:

Happy Flying!
