<?xml version='1.0' encoding='utf-8'?>
<!DOCTYPE rfc [
  <!ENTITY nbsp    "&#160;">
  <!ENTITY zwsp   "&#8203;">
  <!ENTITY nbhy   "&#8209;">
  <!ENTITY wj     "&#8288;">
]>
<?xml-stylesheet type="text/xsl" href="rfc2629.xslt" ?>
<!-- generated by https://github.com/cabo/kramdown-rfc version 1.7.35 (Ruby 3.2.3) -->
<rfc xmlns:xi="http://www.w3.org/2001/XInclude" ipr="trust200902" docName="draft-akhavain-moussa-ai-network-02" category="info" consensus="true" submissionType="IETF" tocInclude="true" sortRefs="true" symRefs="true" version="3">
  <!-- xml2rfc v2v3 conversion 3.33.0 -->
  <front>
    <title abbrev="AI-Internet">AI Network for Training, Inference, and Agentic Interactions</title>
    <seriesInfo name="Internet-Draft" value="draft-akhavain-moussa-ai-network-02"/>
    <author fullname="Arashmid Akhavain">
      <organization>Huawei Canada</organization>
      <address>
        <email>arashmid.akhavain@huawei.com</email>
      </address>
    </author>
    <author fullname="Hesham Moussa">
      <organization>Huawei Canada</organization>
      <address>
        <email>hesham.moussa@huawei.com</email>
      </address>
    </author>
    <date year="2026" month="April" day="29"/>
    <area>Internet</area>
    <keyword>AI Network</keyword>
    <keyword>Agentic Networks</keyword>
    <keyword>AI inference</keyword>
    <keyword>AI training</keyword>
    <keyword>AI Ecosystem Framework</keyword>
    <keyword>AI Ecosystem Requirements</keyword>
    <abstract>
      <?line 58?>

<t>Artificial Intelligence (AI) is rapidly transforming industries and everyday life, fueled by advances in model architectures, training paradigms, and data infrastructure for generation and consumption. However, the effectiveness and reliability of AI depend on two foundational processes: training and inference. Each process introduces unique challenges related to data management, computation, connectivity, privacy, trust, security, and governance.</t>
<t>In this draft, we introduce the Data and Agent Aware-Inference and Training Network (DA-ITN): a unified, intelligent, multi-plane framework designed to address the full spectrum of requirements needed to enable various services and interactions within the AI ecosystem. DA-ITN provides a scalable and adaptive infrastructure that connects AI clients, data providers, model providers, agent providers, service facilitators, and computational resources to support end-to-end training, inference, and agentic interaction lifecycle operations. The architecture features dedicated control, data, and operations &amp; management (OAM) planes to ensure reliability, transparency, and accountability between interacting parties. The proposed framework is not intended for end-to-end standardization; rather, it is intended to serve as a reference framework for the AI ecosystem of the future. Various protocols for the different building blocks shall be defined to enable different functionalities.</t>
    </abstract>
  </front>
  <middle>
    <?line 66?>

<section anchor="introduction">
      <name>Introduction</name>
<t>AI has become a major focus in recent years, with its influence rapidly expanding from everyday tasks like scheduling to complex areas such as healthcare. This growth is largely driven by advances in model architectures, training paradigms, and data infrastructure for generation and consumption. For example, large language models (LLMs) like ChatGPT, Claude, Grok, and DeepSeek, which are now widely used for tasks such as text generation, translation, reasoning, coding, and data analysis, highlight AI’s transformative power to boost productivity and simplify real-life applications. As such, it is clear that AI and machine learning are not passing trends but lasting forces that will continue to evolve. For clarity, in this draft, the term AI refers broadly to all types of models, from simple classification systems to advanced general intelligence models.</t>
<t>However, it is crucial to recognize that the success of AI systems relies on successful interaction between various components to enable services including training, inference, agentic interactions, data governance, and more. To support such interactions, a number of factors and moving parts need to be carefully coordinated, designed, and managed to ensure accuracy, resilience, usability, continuous evolution, trustworthiness, interoperability, and reliability. Moreover, once deployed, AI systems must be continuously monitored and governed to safeguard user safety and societal well-being.</t>
<t>As such, aspects such as data management, computational resources, connectivity, security, privacy, trust, billing, and rigorous testing are all crucial when handling AI systems. Thus, it is important to clearly understand the requirements of AI systems. In this document, we focus on three specific use cases, namely training, inference, and agentic interaction; the requirements derived from the perspectives of these interrelated applications shall guide the design of the framework.</t>
<t>In what follows, we propose a unified, intelligent network architecture called the "Data and Agent Aware-Inference and Training Network" (DA-ITN). This ecosystem is envisioned as a comprehensive, multi-plane network with dedicated control, data, and operations &amp; management (OAM) planes. It is designed to interconnect all relevant stakeholders, including clients, AI service providers, data providers, and third-party facilitators. Its core objective is to provide the infrastructure and coordination necessary to support an ecosystem for enabling the AI of the future at scale.</t>
    </section>
    <section anchor="general-common-requirements">
      <name>General Common Requirements</name>
      <section anchor="identification">
        <name>Identification</name>
        <t>Any entity participating in an AI ecosystem must first obtain an identity. Identity serves as a foundational trust mechanism that enables an entity to be located, authenticated, authorized, verified, and held accountable. It is typically required before accessing services, initiating communication, or interacting with other entities within the ecosystem.</t>
        <t>Because AI ecosystems include different classes of participants, identity mechanisms shall align with the characteristics of each entity while maintaining interoperability across the ecosystem.</t>
<t>For example, for human participants, identity may rely on established verification methods such as government-issued credentials and, where applicable, biometric validation. These mechanisms provide a strong association between a digital identity and a physical individual. On the other hand, digital entities such as AI agents, tools, skills, and tasks are autonomous software entities that do not possess government-issued credentials or biometric attributes. As a result, they require a distinct form of verifiable identity suited to their software nature. An AI agent identity should uniquely distinguish the agent, support authentication and authorization, establish provenance, and enable accountability during interactions with users, systems, and other agents.</t>
        <section anchor="entity-registration-service-subscription-authentication-and-authorization">
          <name>Entity Registration, Service Subscription, Authentication and Authorization</name>
          <t>Once an entity has obtained a suitable identity, it may proceed to register within the AI ecosystem. Registration enables the entity to become discoverable, establish its presence, advertise available capabilities, and request or consume services from other participants. Registration therefore acts as the operational onboarding process that connects identity to ecosystem participation.</t>
          <t>However, registration cannot be treated as a simple enrollment process. AI ecosystems contain heterogeneous entities, including human users, organizations, devices, services, and autonomous AI agents, each with different attributes, trust requirements, and operational roles. A registration mechanism shall therefore support entity-specific metadata, capability descriptions, ownership relationships, trust indicators, and lifecycle status while maintaining interoperability across the ecosystem.</t>
          <t>A key requirement of registration is identity verification, authentication, and authorization. Before an entity is admitted into the ecosystem, the registration process shall validate the authenticity of the presented identity and confirm that the entity is authorized to register under a given role, domain, or ownership context. Authentication mechanisms establish that the entity is genuinely associated with the claimed identity, while authorization determines the permissions and privileges granted after registration. These controls are necessary to prevent impersonation, unauthorized participation, or malicious entities from entering the ecosystem.</t>
          <t>The management of registration information also introduces governance challenges. A trusted authority, federated registry, or distributed trust framework may be required to maintain registration records, authenticate entities, enforce authorization policies, and ensure consistency across administrative domains. The governance model directly impacts scalability, interoperability, trust distribution, and privacy preservation within the ecosystem.</t>
          <t>In addition to registration, entities may subscribe to ecosystem-wide services such as AI model training, data governance, auditing, or trust management. Entities may also subscribe to services provided by other registered participants. Subscription mechanisms therefore require a structured means to discover services, authenticate requests, authorize access rights, define usage policies, and manage service relationships over time.</t>
          <t>Together, registration and subscription establish the operational foundation for ecosystem participation. The framework shall define how entities authenticate, obtain authorization, register capabilities, manage subscriptions, update permissions, and terminate participation while preserving trust, accountability, and interoperability.</t>
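<t>As an illustrative sketch only, the relationship between registration, lifecycle status, and subsequent authorization checks described above can be modeled as a toy in-memory registry. All class, field, and capability names below are assumptions made for illustration; they are not a proposed protocol or data model.</t>
          <sourcecode type="python"><![CDATA[
# Toy sketch (not a proposed protocol): a registry that binds a verified
# identity to a role, capability set, and lifecycle status, and answers
# authorization queries afterwards.

class Registry:
    def __init__(self):
        self._entries = {}

    def register(self, entity_id, role, capabilities, owner=None):
        # In a real ecosystem, the presented identity would be verified
        # (credentials, provenance) before the record is admitted.
        self._entries[entity_id] = {
            "role": role,
            "capabilities": set(capabilities),
            "owner": owner,
            "status": "active",
        }

    def authorize(self, entity_id, capability):
        # Authorization succeeds only for active, registered entities
        # that hold the requested capability.
        entry = self._entries.get(entity_id)
        return bool(entry) and entry["status"] == "active" \
            and capability in entry["capabilities"]

registry = Registry()
registry.register("agent-42", role="ai-agent", capabilities={"inference"})
]]></sourcecode>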
        </section>
        <section anchor="communication-between-interacting-entities">
          <name>Communication Between Interacting Entities</name>
          <t>Once entities are identified, registered, and authorized to participate, the ecosystem must support communication among them. Communication forms the operational foundation through which entities exchange information, request services, coordinate actions, and establish collaborative workflows.</t>
          <t>AI ecosystems introduce highly heterogeneous communication patterns. Human users interact with AI agents through natural language and application interfaces; AI agents communicate with tools, services, and compute resources; agents may exchange information with data owners, sensors, or external knowledge sources; robots and physical devices interact with digital platforms; and infrastructure providers communicate with consuming entities to allocate processing, storage, or networking resources. These interactions differ in protocol, timing, trust requirements, and communication modality.</t>
          <t>Given this diversity, a unified communication infrastructure is required to support interoperable interaction across heterogeneous entities. Such an infrastructure shall not mandate a single communication protocol, but instead provide a common framework capable of supporting multiple interaction models, including request-response, event-driven communication, streaming, publish-subscribe, and autonomous machine-to-machine coordination.</t>
          <t>Interoperability is a critical requirement. Communication mechanisms must function across administrative domains, organizational boundaries, private intranets, and the public internet. Entities operating in separate ecosystems or trust domains should still be capable of establishing secure and understandable interactions without requiring tightly coupled implementations.</t>
          <t>The communication framework must also operate across multiple architectural layers. At the application layer, it should support semantic interoperability, service discovery, and protocol translation between heterogeneous entities. At lower layers, it should integrate with transport, routing, and network-level mechanisms to ensure reliable connectivity, scalability, and performance. This layered approach enables communication continuity from high-level AI workflows down to the underlying network infrastructure.</t>
          <t>A unified communication framework therefore becomes essential not only for message exchange, but also for enabling interoperability, coordination, trust establishment, and scalable interaction across the full AI ecosystem.</t>
        </section>
      </section>
    </section>
    <section anchor="training-service-specific-requirements">
<name>Training Service Specific Requirements</name>
<t>AI model training is the foundational process through which an AI system (e.g., a machine learning model, LLM, or AI agent) learns to perform tasks by analyzing data and adjusting its internal parameters to minimize performance errors. At its core, this process involves feeding input data into a model and applying optimization algorithms to iteratively refine the model’s performance. As such, the training process involves creating a rendezvous point where data, compute, and AI models can interact.</t>
      <section anchor="centralized-versus-decentralized-training">
        <name>Centralized versus Decentralized Training</name>
        <t>It is clear from the above that no matter how advanced the model architecture may be, the success of any training process ultimately hinges on two fundamental components: the model and the data. While the model itself is often developed and hosted in a centralized location—typically within the secure infrastructure of the model owner or designer—data is inherently distributed. The training data might originate from sensors, devices, logs, events, documents, and other diverse sources spread across different geographies and domains. To be exact, whether due to geographic dispersion, organizational silos, privacy constraints, or edge-device generation, data rarely exists in a single, clean repository.</t>
        <t>Today, model training can happen in one of two ways or a combination thereof: centralized or decentralized. In centralized training, thanks to the development of robust data collection techniques and high-throughput connectivity networks, it is now feasible to collect data and bring it to where the model training would occur. On the other hand, a more recent paradigm known as model-follow-data has emerged, advocating for the reverse: rather than transporting large volumes of potentially sensitive data to a central location, the model is dispatched to where the data resides—enabling distributed or federated training.</t>
<t>Accordingly, to facilitate the training process, the scheduling of rendezvous points, whether centralized (data is collected and shipped to where the model is) or decentralized (the model is shipped to where the data is), between distributed data, compute and storage resources, and AI models awaiting training needs to be arranged and managed; this is fundamental for successful model training. However, this scheduling process introduces a number of challenges spanning privacy, trust, utility, and the management of computational and connectivity resources. Moreover, as AI adoption accelerates, both centralized and decentralized approaches will drive increasing pressure on the underlying connectivity infrastructure. Therefore, to ensure scalable, efficient, and cost-effective AI training, it is vital to implement intelligent mechanisms for managing data and model movement, selecting relevant subsets for training, and minimizing unnecessary transfers.</t>
        <t>In the sections that follow, we explore the architectural and operational requirements needed to support this vision and lay the foundation for a high-performance, AI-native training ecosystem.</t>
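<t>As a purely illustrative sketch, the trade-off above between shipping data to the model (centralized) and shipping the model to the data (decentralized) can be reduced to comparing the bytes each strategy moves over the network. This toy model and its function name are assumptions for illustration only; it ignores many real factors such as privacy constraints, knowledge redundancy, link capacity, and Age of Information.</t>
          <sourcecode type="python"><![CDATA[
# Toy cost model (illustrative assumption, not part of DA-ITN): choose
# the strategy that moves fewer gigabytes over the network.

def cheaper_strategy(model_size_gb, site_data_sizes_gb):
    # Centralized: every site's data travels once to the training server.
    centralized_gb = sum(site_data_sizes_gb)
    # Decentralized: one model copy travels to each data site.
    decentralized_gb = model_size_gb * len(site_data_sizes_gb)
    return "centralized" if centralized_gb <= decentralized_gb else "decentralized"
]]></sourcecode>
<t>Under this crude measure, for example, a 10 GB model trained against two sites holding 100 GB and 200 GB of data favors dispatching the model, while a 50 GB model trained against two 5 GB datasets favors collecting the data.</t>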
      </section>
      <section anchor="requirements-breakdown">
        <name>Requirements Breakdown</name>
        <t>Consider a number of AI model training clients awaiting training service. An AI model training client is a user with a new or a pre-trained model who wishes to train or continue training their AI model using data that can be found in the data corpus. The data corpus (the global dataset), as has been previously established, consists of a group of datasets that are distributed across various geographical locations. The AI model training client requires access to this data either in a centralized or distributed manner.</t>
        <section anchor="data-collectionmodel-dispatching">
          <name>Data Collection/Model Dispatching</name>
<t>As previously discussed, data is inherently distributed. In centralized training paradigms, this data must be transferred from its sources to centralized locations where model training occurs. Consider a scenario involving multiple AI model training clients, each awaiting centralized training of their AI model. Each client is interested in a particular dataset that is sufficient for the intended training objective. Aggregating large volumes of data from geographically dispersed sources to the centralized server of each client introduces several significant challenges:</t>
          <ul spacing="normal">
            <li>
              <t>Communication Overhead: The sheer volume of data to be transmitted can place substantial strain on the underlying transport networks, resulting in increased latency and bandwidth consumption.</t>
            </li>
            <li>
              <t>Redundant Knowledge Transfer: Despite originating from different sources, data sets may carry overlapping or identical knowledge content. Transmitting such redundant content leads to unnecessary duplication, wasting resources without providing additional training value.</t>
            </li>
            <li>
              <t>Timely Delivery: In certain applications, the freshness of data is critical. Delays in transmission can degrade the value of the information, as these applications are sensitive to the Age of Information (AoI)—the time elapsed since data was last updated at the destination.</t>
            </li>
            <li>
              <t>Multi-Modal Data Handling: Data often exists in various formats—such as text, images, audio, video, etc—each with distinct transmission requirements. Ensuring accurate and reliable delivery of these diverse data types necessitates differentiated Quality of Service (QoS) levels tailored to the characteristics and sensitivity of each modality.</t>
            </li>
            <li>
              <t>Heterogeneous Access Media: Data may reside across diverse communication infrastructures—for example, some data may be accessible only via 3GPP mobile networks, while other data may be confined to wireline networks. Coordinating data collection across these heterogeneous domains, while maintaining synchronization and consistency, presents a significant operational challenge.</t>
            </li>
          </ul>
          <t>Importantly, many of these challenges are alleviated in decentralized training frameworks, where data remains local to its source and is not transferred over the network. Instead, the model itself is distributed to the various data locations. However, this alternate paradigm introduces its own set of unique challenges.</t>
<t>As previously noted, modern AI models are growing increasingly large. In decentralized training, it is often necessary to replicate the AI model that requires training and transmit the copies to multiple geographically dispersed data sites. This results in a different but equally significant set of logistical and technical hurdles:</t>
          <ul spacing="normal">
            <li>
              <t>Communication Overhead: While data transfer is avoided, dispatching large model files across the network to multiple destinations can still impose substantial load on communication infrastructure, particularly in bandwidth-constrained environments.</t>
            </li>
            <li>
              <t>Redundant Knowledge Transfer: Data residing at different locations may share overlapping knowledge content. Sending models to multiple sites with redundant knowledge content leads to inefficient use of network resources. In some cases, even when knowledge content is only partially redundant, it may be more efficient—considering communication cost—to forego marginal training benefits in favor of reduced overhead.</t>
            </li>
            <li>
              <t>Timeliness and Data Freshness: In certain applications, the Age of Information (AoI) remains critical. Prioritizing model dispatch to data sources with soon-to-expire or time-sensitive information is essential to maximize the utility of training and to maintain up-to-date model performance.</t>
            </li>
          </ul>
        </section>
        <section anchor="dataset-advertisement-and-discovery">
          <name>Dataset Advertisement and Discovery</name>
          <t>Given the distributed nature of data, there must be a mechanism through which data owners can advertise information about their datasets to AI model training clients. This requires the ability to describe the characteristics of the data—such as its knowledge content, quality, size, and Age of Information (AoI)—in a way that allows AI clients to discover and evaluate whether the data aligns with their training objectives. Training objectives can be one or more of: target performance, convergence time, training cost, etc.</t>
          <t>Crucially, the dataset discovery process may need to operate across multiple network domains and heterogeneous communication infrastructures. For example, an AI training client operating over a wireline connection may be interested in data residing on a 3GPP mobile network. This raises an important question: How can data owners effectively advertise their datasets in a way that is discoverable across diverse domains?
To enable such cross-domain data visibility and discovery, the following key requirements shall be considered:</t>
          <ul spacing="normal">
            <li>
<t>Dataset Descriptors: These are metadata objects used by dataset owners to reveal essential information about their datasets to AI clients. Effective dataset descriptors must be self-contained, privacy-preserving, and informative enough to support decision-making by training clients. They should allow dataset owners to selectively disclose details about their data, such as type, relevance, quality metrics, freshness, and perhaps cost of utility, while concealing sensitive or proprietary information (privacy preservation). Dataset descriptors also need to be easily modifiable, as datasets can be dynamic, and changes in a dataset need to be reflected promptly in its description. To ensure interoperability, dataset descriptors can either follow a standardized format or adopt a flexible but well-defined structure that enables consistent interpretation across different systems and domains.</t>
            </li>
            <li>
<t>Dataset Discovery Mechanisms: Dataset discovery refers to the process by which AI training clients locate and identify datasets across potentially vast and heterogeneous environments. An effective discovery mechanism should support global-scale searchability and cross-domain operability, allowing clients to find relevant datasets regardless of where they reside or which communication infrastructure they are accessible through. Discovery protocols may be standardized within specific domains (e.g., mobile networks, IoT platforms) or designed to function interoperably across multiple domains, enabling seamless integration and visibility. Discovery mechanisms should also remain up to date as the underlying data changes dynamically.</t>
            </li>
            <li>
<t>Dataset Relationship Maps: Training often requires identifying groups of datasets that collectively meet specific requirements. Evaluating each dataset in isolation may be insufficient. Instead, a mechanism is needed to establish relationships among datasets, enabling AI training clients to assemble the appropriate combination of data for their tasks. These relationships can be envisioned as maps or topologies. This is a crucial step: if an AI model client cannot find a combination of datasets that satisfies its requirements, the client might choose not to submit the model for training at that time, avoiding resource wastage from the outset.</t>
            </li>
            <li>
              <t>Timely reporting: Given the dynamic nature of data availability, characteristics, and accessibility, it is essential to have advertisement mechanisms that can promptly reflect any changes. Real-time or near-real-time updates ensure that the AI training process remains aligned with the most current data conditions, thereby maximizing both effectiveness and accuracy. Timely reporting helps prevent training on outdated or irrelevant data and supports optimal decision-making in model selection and training pipeline configuration.</t>
            </li>
          </ul>
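<t>To make the descriptor and discovery requirements above concrete, the following sketch models descriptors as simple metadata records and discovery as a filter driven by a client's training objective. The field names (knowledge_tags, quality, aoi_seconds) and sample values are assumptions for illustration only, not a proposed descriptor format.</t>
          <sourcecode type="python"><![CDATA[
# Illustrative sketch only: dataset descriptors as metadata records and
# a discovery filter driven by a client's training objective.

DESCRIPTORS = [
    {"id": "ds-eu-1", "knowledge_tags": {"traffic", "lidar"},
     "quality": 0.9, "size_gb": 120, "aoi_seconds": 30},
    {"id": "ds-na-2", "knowledge_tags": {"traffic"},
     "quality": 0.7, "size_gb": 40, "aoi_seconds": 600},
]

def discover(descriptors, required_tags, min_quality=0.0, max_aoi=None):
    """Return the ids of descriptors whose advertised metadata meets the
    client's objective (required knowledge content, quality, freshness)."""
    matches = []
    for d in descriptors:
        if not required_tags <= d["knowledge_tags"]:
            continue  # missing required knowledge content
        if d["quality"] < min_quality:
            continue  # advertised quality too low
        if max_aoi is not None and d["aoi_seconds"] > max_aoi:
            continue  # data not fresh enough (Age of Information)
        matches.append(d["id"])
    return matches
]]></sourcecode>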
<t>Additionally, it should be highlighted that in AI training, discovering datasets alone is not enough. Third-party resources like compute and storage are also essential, and the providers of those resources must be able to advertise their capabilities so AI clients can locate and utilize them effectively. Just like with data, resource discovery requires descriptors, multi-domain accessibility, and timely updates to support seamless coordination between models, data, and infrastructure. Data and resource discovery is essential in both centralized and decentralized training, as both can be performed on third-party infrastructure.</t>
          <t>It is worth mentioning here that the data discovery process is a typical discovery problem as defined by draft-akhavain-moussa-dawn-problem-statement-00 and is a clear use case with requirements as defined in draft-king-dawn-requirements-01.</t>
        </section>
        <section anchor="handling-mobility-and-service-continuity">
          <name>Handling Mobility and Service Continuity</name>
          <t>In some decentralized training applications, AI models are designed to traverse a predefined route, training on multiple datasets in a sequential or federated manner. This introduces the need to manage model mobility. However, the underlying data landscape is often dynamic—new data is continuously generated, existing data may be deleted, or datasets may be relocated to different nodes or domains.</t>
<t>As a result, enabling reliable model mobility in such a fluid environment requires robust mobility management mechanisms. For instance, while a model is en route to a specific data location for training, that dataset may be moved elsewhere. In such cases, the model must either be re-routed to the new location or redirected to an alternative dataset that satisfies similar training objectives.</t>
          <t>Additionally, since training occurs on remote compute infrastructure and can be time-intensive, unexpected resource shutdowns or failures may interrupt the process. These interruptions can lead to service discontinuity, which must be addressed through mechanisms such as checkpointing, fallback resource selection, or dynamic rerouting of model or data to maintain training progress and system reliability.</t>
          <t>Additionally, model mobility may involve training on datasets that are distributed across heterogeneous communication infrastructures. Some infrastructures, such as emerging 6G networks, offer built-in mobility support—for example, when data resides on mobile user equipment (UE), its location can be tracked using native features of the network. However, such mobility handling capabilities may not exist in other infrastructures, such as traditional wireline networks or legacy systems, making seamless model movement and data access more challenging in those environments.</t>
        </section>
        <section anchor="privacy-trust-and-data-ownership-and-utility">
          <name>Privacy, Trust, and Data Ownership and Utility</name>
          <t>Privacy and trust are mutual responsibilities between data owners and model owners and shall be protected. Granting clients access to data for training and knowledge building should be a regulated process, with mechanisms to track data ownership and future use. Initial discussions on this topic have taken place in forums such as the AI-Control Working Group.</t>
          <t>Equally important is ensuring that model owners are protected from data poisoning. They must have confidence that the datasets they use are accurately described and not misrepresented. If data owners provide false metadata—intentionally or otherwise—model owners may unknowingly train on unsuitable or harmful datasets, leading to degraded model performance. To safeguard both parties, innovative verification and enforcement mechanisms are needed. Technologies like blockchain could offer potential solutions for establishing trust and accountability, but further research and exploration are necessary to develop practical frameworks.</t>
        </section>
        <section anchor="testing-and-performance-management">
          <name>Testing and Performance Management</name>
<t>Another critical aspect of training is testing and performance evaluation, typically carried out using a separate subset of the data known as the testing dataset. This dataset is not used to update the model’s weights but to assess its performance on unseen samples. In centralized training, this process is straightforward because all data resides in a single, accessible location, making it easy to partition the dataset into training and testing subsets. However, in distributed training environments, where data is spread across multiple locations or devices, creating a representative and unbiased testing dataset without aggregating the data centrally becomes a major challenge. Developing effective, privacy-preserving methods for testing in such settings requires innovative solutions.</t>
        </section>
        <section anchor="training-service-qos-guarantee">
          <name>Training Service QoS Guarantee</name>
<t>Beyond ensuring traditional Quality of Service (QoS) for data transmission, a new dimension of QoS must be considered: the QoS of training itself. In AI training workflows, it is crucial to guarantee that key performance indicators (KPIs) related to training, such as accuracy convergence, training time, and resource utilization, are met consistently. This raises several important questions:</t>
          <ul spacing="normal">
            <li>
              <t>How can these training KPIs be guaranteed in dynamic or distributed environments?</t>
            </li>
            <li>
              <t>What mechanisms can be used to monitor and track training performance in real time?</t>
            </li>
            <li>
              <t>Should AI training be treated like best-effort traffic, where no guarantees are made and resources are allocated as available?</t>
            </li>
            <li>
              <t>Should training tasks receive prioritized or differentiated service levels, similar to high-priority traffic in traditional networks?</t>
            </li>
          </ul>
          <t>Addressing these questions is essential to ensure predictable and reliable AI model development, especially as training workloads grow in complexity and scale. It may require introducing new QoS frameworks tailored specifically to the needs of AI training systems.</t>
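          <t>The real-time KPI tracking raised by these questions can be sketched in a few lines. The following is an illustrative sketch only; the KpiTarget fields, the TrainingQosMonitor name, and the violation policy are assumptions, not part of any DA-ITN specification.</t>
          <sourcecode type="python"><![CDATA[
```python
# Illustrative sketch: KpiTarget and TrainingQosMonitor are hypothetical
# names, not defined by DA-ITN.
from dataclasses import dataclass

@dataclass
class KpiTarget:
    max_epochs: int          # epoch budget for reaching convergence
    target_accuracy: float   # required accuracy level
    max_gpu_hours: float     # resource-utilization cap

class TrainingQosMonitor:
    """Tracks per-epoch metrics and reports KPI violations as they occur."""

    def __init__(self, target):
        self.target = target
        self.epochs = 0
        self.gpu_hours = 0.0
        self.best_accuracy = 0.0

    def record_epoch(self, accuracy, gpu_hours):
        """Record one epoch's metrics; return any KPI violations so far."""
        self.epochs += 1
        self.gpu_hours += gpu_hours
        self.best_accuracy = max(self.best_accuracy, accuracy)
        violations = []
        if self.gpu_hours > self.target.max_gpu_hours:
            violations.append("resource budget exceeded")
        if (self.epochs >= self.target.max_epochs
                and self.best_accuracy < self.target.target_accuracy):
            violations.append("accuracy not reached within epoch budget")
        return violations
```
]]></sourcecode>
          <t>Whether a violation triggers best-effort degradation or a prioritized reallocation of resources is exactly the policy question raised above.</t>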
        </section>
        <section anchor="charging-and-billing">
          <name>Charging and Billing</name>
          <t>The AI training process involves a diverse ecosystem of stakeholders, including data owners, model owners, and resource providers. Each of these parties plays one or more vital roles in enabling successful training workflows.</t>
          <t>For example, communication providers contribute not only by transporting datasets and models across the network, but may also serve as data providers themselves. This is particularly evident in the emerging design of 6G networks, which integrate sensing capabilities with communication infrastructure. As a result, 6G operators are uniquely positioned to offer both connectivity and data, making them central players in the training pipeline.</t>
          <t>Despite their different roles, all parties contribute to enabling AI training as a service, a complex and resource-intensive process that is far from free. Therefore, it is essential to establish a robust charging and billing framework that ensures each participant is fairly compensated based on their contribution.</t>
          <t>Several open questions arise in this context:</t>
          <ul spacing="normal">
            <li>
              <t>Should training services follow a prepaid model, or adopt a pay-per-use structure?</t>
            </li>
            <li>
              <t>Should there be tiered service offerings, such as gold, silver, and platinum, each providing different levels of performance guarantees or priority access?</t>
            </li>
            <li>
              <t>How should these tiers be defined and enforced in terms of service quality, resource allocation, and response time?</t>
            </li>
          </ul>
          <t>Developing fair, transparent, and scalable billing mechanisms is critical to facilitating collaboration across stakeholders and sustaining the economic viability of distributed AI training ecosystems. These challenges call for further research into incentive structures, dynamic pricing models, and smart contract-based enforcement, especially in scenarios involving cross-organizational or cross-network cooperation.</t>
        </section>
      </section>
    </section>
    <section anchor="inference">
      <name>Inference</name>
      <t>Inference is critical because it represents the phase where the model begins to deliver practical value. Unlike training, which is typically a one-time or periodic, resource-intensive process, inference often needs to operate continuously and efficiently, sometimes in real time. Although inference is less resource-intensive than training, it has strict requirements that govern its success. While a single AI inference might be lightweight and fast, serving many users with many inference requests demands significant hardware resources and poses serious scalability challenges. In what follows, we explore the requirements that enable a successful AI inference ecosystem.</t>
      <section anchor="requirement-breakdown">
        <name>Requirement Breakdown</name>
        <t>We envision an inference ecosystem composed of a large number of pre-trained AI models (or agents) distributed across geographical locations. These models are capable of performing a wide range of tasks, such as image classification, language translation, or speech recognition. Some models may specialize in the same task but vary in performance, accuracy, latency, or resource demands. This diverse pool of models is accessed by numerous inference clients (users or applications) who submit inputs, referred to as queries, and receive task-specific outputs.</t>
        <t>These queries can vary greatly in complexity, structure, and modality, with some requiring the cooperation of multiple models to fulfill a single request. The overarching goal of the ecosystem is to efficiently match incoming queries with the most suitable models, ensuring accurate, timely, and resource-aware responses. Achieving this requires intelligent orchestration, load balancing, and potentially dynamic model selection based on factors such as performance, availability, cost, and user-specific requirements. In what follows, we discuss the various aspects of this ecosystem and the different requirements needed for its success.</t>
        <section anchor="model-deployment-and-mobility">
          <name>Model Deployment and Mobility</name>
          <t>The first step toward building a successful AI inference ecosystem is the optimal deployment of trained AI models (or AI agents). In this context, optimality refers to both the physical or network location of the model and the manner in which it is deployed. AI models vary significantly in size and resource requirements, ranging from lightweight models that are only a few kilobytes to large-scale models with billions of parameters. This wide range makes deployment decisions critical to achieving both efficient performance and effective resource utilization. A factor unique to AI models/agents is that they are software components not bound to specific hardware: they can be deleted, copied, moved, or split across multiple compute locations. All of these aspects provide flexibility in design, provided the real-time status of the underlying network dynamics and resources is made accessible. As such, the following aspects must be taken into account when handling model deployment and mobility:</t>
          <ul spacing="normal">
            <li>
              <t>Choosing the right facility to host a model: whether it's a lightweight edge device, a local server, or a high-performance cloud data center, deployment will depend on the model's size, computational requirements, and expected query volume. For example, smaller models might be best suited for deployment on edge devices closer to users, enabling low-latency responses. In contrast, larger models may require centralized or specialized infrastructure with high compute and memory capacity.</t>
            </li>
            <li>
              <t>Load balancing: Once models are deployed, inference traffic begins to flow, with users or applications sending queries to the appropriate agents. If not managed properly, this traffic can lead to congestion, creating bottlenecks that degrade inference performance through increased latency or dropped requests. To avoid such scenarios, models should be deployed strategically to distribute the load, ensuring smooth operation. Traditional load balancing techniques can be employed to redirect traffic away from overburdened nodes and towards underutilized ones. However, more sophisticated strategies may involve replicating models and placing these replicas closer to regions with high query demand, thereby minimizing latency and easing network traffic engineering challenges.</t>
            </li>
            <li>
              <t>Mobility-aware deployment: the dynamic nature of inference traffic necessitates mobility-aware deployment. For instance, consider a large data center acting as a centralized inference hub, hosting numerous models and handling a significant volume of queries. Over time, this hub may experience traffic overload. In such cases, migrating certain models to alternative locations can help alleviate pressure. However, model migration is not without its challenges—particularly if a model is actively serving queries at the time of migration. In such situations, mobility handling mechanisms must be in place to ensure seamless service continuity. These mechanisms could involve session handovers, temporary state preservation, or model version synchronization, all designed to maintain uninterrupted service during the migration process.</t>
            </li>
          </ul>
          <t>In summary, optimal model deployment requires careful consideration of model size, resource needs, query distribution, and real-time adaptability. Achieving this lays the foundation for a responsive, scalable, and resilient AI inference ecosystem.</t>
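          <t>The size- and latency-driven placement decision described above can be illustrated with a minimal heuristic. The Site attributes and the selection rule below are assumptions for illustration only, not a normative DA-ITN placement algorithm.</t>
          <sourcecode type="python"><![CDATA[
```python
# Illustrative placement heuristic; Site and place_model are
# hypothetical names, not part of DA-ITN.
from dataclasses import dataclass

@dataclass
class Site:
    name: str
    free_memory_gb: float  # available hosting capacity
    latency_ms: float      # typical round-trip latency to clients

def place_model(model_gb, max_latency_ms, sites):
    """Pick the lowest-latency site that can hold the model and meet
    the latency bound; return None when no site qualifies."""
    feasible = [s for s in sites
                if s.free_memory_gb >= model_gb
                and s.latency_ms <= max_latency_ms]
    if not feasible:
        return None
    return min(feasible, key=lambda s: s.latency_ms).name
```
]]></sourcecode>
          <t>A production placer would also weigh query volume, replication, and migration cost, but the feasibility-then-preference structure is the same.</t>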
        </section>
        <section anchor="ai-model-ai-agent-discovery-and-description">
          <name>AI Model (AI Agent) Discovery and Description</name>
          <t>Just as data descriptors and discovery mechanisms are essential during the training phase, AI model inference clients also require a robust discovery mechanism during the inference stage. In an ecosystem populated by a large and diverse pool of models—each with unique capabilities and specializations—clients are presented with significant flexibility and choice in selecting the most suitable models for their queries. However, to make informed decisions, clients must have access to information that enables them to distinguish between models based on criteria such as performance, specialization, availability, and resource requirements.</t>
          <t>The AI model discovery process becomes even more complex when it needs to function across multiple network domains and heterogeneous communication infrastructures. For instance, a client connected via a wireline network might need to interact with a model deployed on a mobile 3GPP network. Such scenarios raise a critical question: How can model owners advertise their models in a way that ensures discoverability and interoperability across diverse domains?</t>
          <t>Addressing this challenge requires the development of standardized model advertisement and discovery protocols that can operate seamlessly across infrastructure boundaries. These protocols must accommodate differences in network technology, latency constraints, and security requirements while providing consistent and reliable access to model information. Ensuring cross-domain discoverability is crucial to unlocking the full potential of a globally distributed inference ecosystem.</t>
          <t>To enable such cross-domain AI model visibility and discovery, the following key requirements must be considered:</t>
          <ul spacing="normal">
            <li>
              <t>AI Model Descriptors: These are metadata objects used by model owners to reveal essential aspects of their models to AI inference clients. Effective model descriptors must be self-contained, privacy-preserving, and informative enough to support decision-making by inference clients. They should allow model owners to selectively disclose details about their model, such as skills, performance reviews, trust level, relevance, quality metrics, freshness, and perhaps cost of utility, while concealing sensitive or proprietary information. To ensure interoperability, model descriptors can either follow a standardized format or adopt a flexible but well-defined structure that enables consistent interpretation across different systems and domains.</t>
            </li>
            <li>
              <t>AI Model Discovery Mechanisms: These refer to the processes by which AI inference clients locate and identify models/agents across potentially vast and heterogeneous environments. An effective discovery mechanism should support global-scale searchability and cross-domain operability, allowing clients to find relevant models/agents regardless of where they reside or which communication infrastructure they are accessible through. Discovery protocols may be standardized within specific domains (e.g., mobile networks, IoT platforms) or designed to function interoperably across multiple domains, enabling seamless integration and visibility.</t>
            </li>
            <li>
              <t>AI Model Relationship Maps: As queries may require collaboration between multiple models/agents, relationship maps between models/agents with respect to different tasks can help clients choose the appropriate subset of models/agents to handle their queries.</t>
            </li>
            <li>
              <t>Timely Reporting: Similar to AI datasets, the status of an AI model can change over time—for example, due to shifts in workload or resource availability. It is important that such changes are reported promptly and accurately, allowing clients to make informed decisions based on the model’s current state. This is essential for ensuring efficient model selection and maintaining high-quality, reliable inference outcomes.</t>
            </li>
          </ul>
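          <t>A model descriptor along these lines could be serialized as a small, self-contained metadata object. The field names below are illustrative assumptions drawn from the attributes listed above, not a standardized schema.</t>
          <sourcecode type="python"><![CDATA[
```python
# Hypothetical descriptor sketch; field names are assumptions only.
import json

def make_model_descriptor(model_id, skills, accuracy, trust_level,
                          last_updated, cost_per_query):
    """Serialize only the fields the owner chooses to disclose;
    weights, architecture, and other proprietary details are never
    included, keeping the descriptor privacy-preserving."""
    return json.dumps({
        "model_id": model_id,
        "skills": skills,                 # advertised capabilities
        "accuracy": accuracy,             # published quality metric
        "trust_level": trust_level,       # e.g. attested / unverified
        "last_updated": last_updated,     # freshness indicator
        "cost_per_query": cost_per_query, # cost of utility
    }, sort_keys=True)
```
]]></sourcecode>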
          <t>It is important to emphasize that AI model discovery differs fundamentally from data discovery. While data are passive objects that require external querying or manipulation, AI models are intelligent, autonomous entities capable of making decisions based on their own capabilities, status, and context. This distinction opens up new and more dynamic possibilities for how models are discovered and engaged in an inference ecosystem.</t>
          <t>In traditional data discovery, clients search for and retrieve relevant datasets based on metadata or predefined criteria. However, in the case of model discovery, the process can be much more interactive and flexible. One approach involves the client actively discovering models by querying a directory or registry using model descriptors. Based on these descriptors, the client selects one or more models to handle a specific inference task. However, given that models can reason and act independently, model discovery does not have to be limited to client-driven selection. An alternative approach is to reverse the flow of interaction. Instead of clients seeking out models, they can publish their tasks to a shared task pool, accessible to all available models. These tasks include descriptors that define the type of work to be done, expected outputs, and quality-of-service requirements. Models can then autonomously scan this pool, evaluate whether they are well-suited for specific tasks, and choose to express interest in executing them. This self-selection process allows models to play an active role in task matching, improving system scalability and efficiency.</t>
          <t>The final assignment of a task can be handled in different ways. Clients may retain full control and approve or reject interested models based on their preferences or priorities. Alternatively, the system may operate in a fully autonomous mode, where tasks are assigned automatically to the first or best-matching model, without requiring client intervention—depending on the client's chosen policy.</t>
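          <t>The task-pool interaction described above can be sketched as follows. TaskPool, its methods, and the first-bid assignment policy are hypothetical illustrations of the publish/scan/self-select flow, not a defined DA-ITN interface.</t>
          <sourcecode type="python"><![CDATA[
```python
# Hypothetical task-pool matching sketch; names and policy are
# illustrative assumptions.
class TaskPool:
    def __init__(self):
        self.tasks = {}     # task_id -> required skill descriptor
        self.interest = {}  # task_id -> models that expressed interest

    def publish(self, task_id, required_skill):
        """A client publishes a task descriptor to the shared pool."""
        self.tasks[task_id] = required_skill
        self.interest[task_id] = []

    def scan_and_bid(self, model_id, model_skills):
        """A model scans the pool and registers interest in tasks
        it judges itself well-suited for."""
        for task_id, skill in self.tasks.items():
            if skill in model_skills:
                self.interest[task_id].append(model_id)

    def assign(self, task_id):
        """Fully autonomous policy: first interested model wins;
        a client-approval step could be inserted here instead."""
        bids = self.interest.get(task_id, [])
        return bids[0] if bids else None
```
]]></sourcecode>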
          <t>This agent-driven paradigm reflects the shift toward more decentralized and intelligent AI ecosystems, where models are not merely passive computation endpoints but active participants in task negotiation and resource allocation. Such a system not only enhances scalability and flexibility but also allows for more efficient utilization of the available model pool, especially in heterogeneous and dynamic environments.</t>
          <t>It is worth mentioning here that the model discovery process is a typical discovery problem as defined in draft-akhavain-moussa-dawn-problem-statement-00 and is a clear use case with requirements as defined in draft-king-dawn-requirements-01.</t>
        </section>
        <section anchor="query-and-inference-result-routing">
          <name>Query and Inference Result Routing</name>
          <t>A significant challenge in AI inference networks lies in efficiently routing client queries to the appropriate inference models and ensuring the corresponding results are reliably delivered back to the client. This becomes particularly complex in scenarios involving mobility and multi-domain environments, where both the client and the model may exist across different types of network infrastructures. The key challenges and considerations include:</t>
          <ul spacing="normal">
            <li>
              <t>Query Routing Across Heterogeneous Networks: When a client accesses the inference ecosystem through a mobile network such as 3GPP 6G, and the target model is hosted in a wireline or cloud-based infrastructure, routing the query across these distinct domains is non-trivial. Differences in network architecture, protocols, and service guarantees complicate the end-to-end flow.</t>
            </li>
            <li>
              <t>Mobility Management During Inference Execution: While mobile networks like 6G are designed to handle user mobility, inference tasks may take time to process—particularly when using large models or performing complex computations. During this time, the client may change physical location, switch devices, or even go offline. Ensuring that inference results can still reach the client under these dynamic conditions poses a significant challenge.</t>
            </li>
            <li>
              <t>Handling Client State Changes: If a client becomes idle or disconnects entirely during inference, the system must decide what to do with the completed result. Should it be queued, buffered, forwarded to another linked device, or simply discarded? A robust mechanism is needed to track client state, maintain context, and guarantee result delivery or at least graceful degradation.</t>
            </li>
            <li>
              <t>Support for Live and Streaming Inference: Some use cases, such as real-time audio transcription, involve live streaming of data from the client to the model and vice versa. These sessions require sustained, low-latency connections and are particularly sensitive to interruptions caused by mobility or handoffs between networks. Ensuring session continuity and maintaining streaming quality across network boundaries is a complex but critical aspect of real-world inference deployments.</t>
            </li>
            <li>
              <t>Cross-Domain Connectivity and Session Management: The involvement of multiple network operators and domains introduces questions around interoperability, session tracking, and handover coordination. There is a need for intelligent infrastructure capable of end-to-end session management, including maintaining metadata, context, and service quality as the session traverses different networks.</t>
            </li>
          </ul>
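          <t>One way to reason about the client-state handling raised above is a delivery component that sends, buffers, or rejects results depending on the client's current state. The states and buffering policy below are assumptions chosen to illustrate the design space, not a defined mechanism.</t>
          <sourcecode type="python"><![CDATA[
```python
# Illustrative sketch of client-state-aware result delivery.
class ResultDelivery:
    def __init__(self):
        self.state = {}   # client_id -> "active" | "idle" | "offline"
        self.buffer = {}  # client_id -> results queued while idle

    def set_state(self, client_id, state):
        self.state[client_id] = state

    def deliver(self, client_id, result):
        """Send immediately to active clients; buffer for idle ones;
        report undeliverable for offline or unknown clients."""
        state = self.state.get(client_id, "offline")
        if state == "active":
            return ("sent", result)
        if state == "idle":
            self.buffer.setdefault(client_id, []).append(result)
            return ("buffered", None)
        return ("undeliverable", None)

    def resume(self, client_id):
        """Flush buffered results when the client becomes active again."""
        self.state[client_id] = "active"
        return self.buffer.pop(client_id, [])
```
]]></sourcecode>
          <t>Whether "undeliverable" means discard, forward to a linked device, or persist longer-term is precisely the policy question the ecosystem must answer.</t>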
        </section>
        <section anchor="inference-chainingcollaborative-inference">
          <name>Inference Chaining/Collaborative Inference</name>
          <t>Another critical aspect of an AI inference ecosystem is the need for model collaboration to fulfill complex or multi-faceted tasks. Not all inference requests can be handled by a single model; in many cases, collaboration between multiple models is necessary. Effectively managing this task-based collaboration is essential to ensure accurate, efficient, and scalable inference services. Model collaboration can take several distinct forms:</t>
          <ul spacing="normal">
            <li>
              <t>Inference Chaining: In this model, the output of one model serves as the input to the next in a sequential pipeline. Each model performs a specific stage of the task, and the final result—produced by the last model in the chain—is returned to the client. This is common in multi-stage tasks such as image processing followed by object detection and then classification.</t>
            </li>
            <li>
              <t>Parallel Inference: Here, a complex task is decomposed into multiple subtasks, each of which is assigned to a specialized model. These models operate concurrently, and their outputs are aggregated to form a unified inference result. This approach is particularly useful when dealing with large data sets or when a task spans different domains of expertise.</t>
            </li>
            <li>
              <t>Hierarchical Inference: A model is assigned as a task manager and is responsible for delegating subtasks to subordinate service models.</t>
            </li>
            <li>
              <t>Collaborative Inference: In this more dynamic and decentralized form, the task is assigned to a group of models that are capable of discovering one another, assessing their respective capabilities, and coordinating among themselves to devise a shared strategy for completing the task. This model requires more sophisticated communication, negotiation, and orchestration mechanisms.</t>
            </li>
          </ul>
          <t>Regardless of the collaboration format, the success of such multi-model interactions depends on the availability of a robust management infrastructure. This infrastructure must enable seamless coordination between models, even when:</t>
          <ul spacing="normal">
            <li>
              <t>The models are hosted by different providers,</t>
            </li>
            <li>
              <t>They are deployed across heterogeneous communication networks,</t>
            </li>
            <li>
              <t>They use varying protocols, or</t>
            </li>
            <li>
              <t>They have differing performance characteristics.</t>
            </li>
          </ul>
          <t>Such a management system must abstract away the underlying complexities and provide standardized interfaces, discovery mechanisms, communication protocols, and coordination frameworks that allow models to interact effectively. Without this, collaborative inference would be brittle, inefficient, or impossible to scale. In essence, the ability to orchestrate model collaboration across diverse environments is a cornerstone of a flexible, intelligent, and robust AI inference ecosystem.</t>
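          <t>The chaining and parallel forms above can be sketched with plain functions standing in for remote models. This is an illustrative composition pattern under that simplifying assumption, not a DA-ITN interface.</t>
          <sourcecode type="python"><![CDATA[
```python
# Sketch of two collaboration forms; local functions stand in for
# remote models/agents.
def chain(models, query):
    """Inference chaining: each model's output feeds the next;
    the final model's output is returned to the client."""
    result = query
    for model in models:
        result = model(result)
    return result

def parallel(models, subqueries, aggregate):
    """Parallel inference: subtasks run independently on specialized
    models, then an aggregation step unifies their outputs."""
    return aggregate([m(q) for m, q in zip(models, subqueries)])
```
]]></sourcecode>
          <t>Hierarchical and collaborative inference add a coordinating party (a manager model or peer negotiation) on top of these two primitives.</t>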
        </section>
        <section anchor="compute-and-resource-management">
          <name>Compute and Resource Management</name>
          <t>In many scenarios, the compute infrastructure used to host and run inference models is managed by third-party providers, not the model owners themselves. These compute providers are responsible for meeting the Quality of Service (QoS) levels agreed upon with the model owners—such as latency, uptime, throughput, and reliability.</t>
          <ul spacing="normal">
            <li>
              <t>Ensuring these service levels are consistently met raises the question of accountability. If performance degrades due to compute resource issues—such as overloaded hardware or network outages—who is responsible for the failed inference tasks?</t>
            </li>
            <li>
              <t>There must be clear, enforceable service-level agreements (SLAs) that define roles, responsibilities, and penalties for non-compliance.</t>
            </li>
            <li>
              <t>Mechanisms for performance monitoring, auditing, and dispute resolution need to be integrated into the ecosystem to make such arrangements viable and trustworthy.</t>
            </li>
          </ul>
        </section>
        <section anchor="privacy-preservation-and-security">
          <name>Privacy Preservation and Security</name>
          <t>While models are the intellectual property of their owners, they may operate on infrastructure owned by others. This raises significant concerns around privacy and intellectual property protection.</t>
          <ul spacing="normal">
            <li>
              <t>Sensitive model details such as architecture, weights, and optimization strategies must be protected from exposure or reverse engineering by untrusted compute hosts.</t>
            </li>
            <li>
              <t>Techniques such as secure computing, encrypted model execution, and remote attestation protocols may be necessary to ensure that models run securely without revealing proprietary details.</t>
            </li>
            <li>
              <t>Model owners must also be assured that inference inputs and outputs remain confidential, particularly in applications involving personal or sensitive data.</t>
            </li>
          </ul>
        </section>
        <section anchor="utility-handling-and-qos-requirements">
          <name>Utility Handling and QoS Requirements</name>
          <t>Utility handling refers to the regulation, protection, and fair governance of how models are used, accessed, and monitored throughout the ecosystem. This encompasses several critical questions:</t>
          <ul spacing="normal">
            <li>
              <t>How can we guarantee that a model deployed on remote infrastructure is not being tampered with, copied, or intentionally repurposed?</t>
            </li>
            <li>
              <t>How do we ensure that workload distribution is fair across available models, preventing monopolization by a few and giving equal visibility and opportunity to all participating models?</t>
            </li>
            <li>
              <t>What protections are in place to ensure that models are not being poisoned, exploited, or involved in illegal activities, either through malicious inputs or untrusted outputs?</t>
            </li>
            <li>
              <t>How do we ensure the integrity of inference results, so that outputs are delivered to clients without alteration, manipulation, or censorship?</t>
            </li>
          </ul>
          <t>Addressing these concerns may require digital rights management (DRM) for AI models, usage monitoring tools, and potentially blockchain-based logging or audit trails to ensure transparency and traceability.</t>
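          <t>As one illustration of the audit-trail idea, a hash-chained log makes any later alteration of a recorded inference event detectable. This is a sketch of tamper-evident logging under simple assumptions, not a normative DA-ITN mechanism.</t>
          <sourcecode type="python"><![CDATA[
```python
# Illustrative hash-chained audit trail; AuditTrail is a hypothetical
# name, and the record format is an assumption.
import hashlib

class AuditTrail:
    def __init__(self):
        self.entries = []  # list of (record, chained digest)

    def append(self, record):
        """Each digest covers the record plus the previous digest,
        chaining every entry to all entries before it."""
        prev = self.entries[-1][1] if self.entries else "genesis"
        digest = hashlib.sha256((prev + record).encode()).hexdigest()
        self.entries.append((record, digest))
        return digest

    def verify(self):
        """Recompute the chain; altering any record invalidates its
        digest and every digest after it."""
        prev = "genesis"
        for record, digest in self.entries:
            if hashlib.sha256((prev + record).encode()).hexdigest() != digest:
                return False
            prev = digest
        return True
```
]]></sourcecode>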
          <t>On the other hand, the definition of Quality of Service (QoS) for inference tasks is very broad and can take many forms. For instance, QoS could mean guaranteeing a certain accuracy of a response, a bound on response time, or a required level of expertise. We believe that the topic of QoS guarantees requires extensive study and analysis.</t>
        </section>
        <section anchor="model-upgrade-streamlining">
          <name>Model Upgrade Streamlining</name>
          <t>AI models are not static; they undergo continuous upgrades, improvements, and fine-tuning to maintain accuracy, adapt to new data, or support evolving tasks.</t>
          <ul spacing="normal">
            <li>
              <t>The ecosystem must support seamless model versioning, including adding, removing, or modifying model agents without disrupting ongoing services.</t>
            </li>
            <li>
              <t>Updated model profiles must be instantly reflected in the discovery layer, ensuring clients always have access to the most current and accurate model descriptions.</t>
            </li>
            <li>
              <t>For large models, upgrade procedures must be efficient and bandwidth-conscious, potentially using incremental update techniques to avoid full redeployment.</t>
            </li>
            <li>
              <t>Moreover, strategies must be in place to handle hot-swapping of models, where an old model is gracefully decommissioned and replaced by a new one—without causing inference failures or data loss during the transition.</t>
            </li>
          </ul>
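          <t>Hot-swapping can be reasoned about as an atomic pointer flip in a versioned registry: the new version is staged alongside the serving one and activated in a single step. The ModelRegistry below is a hypothetical sketch of that staging-then-activation pattern, not a prescribed design.</t>
          <sourcecode type="python"><![CDATA[
```python
# Sketch of versioned hot-swapping; names are illustrative only.
class ModelRegistry:
    def __init__(self):
        self.versions = {}  # model name -> {version: artifact}
        self.active = {}    # model name -> currently serving version

    def stage(self, name, version, artifact):
        """Upload a new version without touching the serving one."""
        self.versions.setdefault(name, {})[version] = artifact

    def activate(self, name, version):
        """Single pointer flip: later lookups see the new version,
        and in-flight lookups never observe a partial artifact."""
        if version not in self.versions.get(name, {}):
            raise KeyError(f"{name}:{version} not staged")
        self.active[name] = version

    def resolve(self, name):
        """Return the artifact of the active version for this model."""
        return self.versions[name][self.active[name]]
```
]]></sourcecode>
          <t>Keeping the previous version staged also gives a natural rollback path if the upgrade misbehaves.</t>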
        </section>
        <section anchor="charging-and-billing-1">
          <name>Charging and Billing</name>
          <t>The AI inference process involves a diverse ecosystem of stakeholders, including model owners, compute providers, and communication providers. Each of these parties plays one or more vital roles in enabling successful inference workflows. Therefore, it is essential to establish a robust charging and billing framework that ensures each participant is fairly compensated based on their contribution.</t>
          <t>Several open questions arise in this context:</t>
          <ul spacing="normal">
            <li>
              <t>Should inference services follow a prepaid model, or adopt a pay-per-use structure?</t>
            </li>
            <li>
              <t>Will there be tiered service offerings—such as gold, silver, and platinum—each providing different levels of performance guarantees or priority access?</t>
            </li>
            <li>
              <t>How should these tiers be defined and enforced in terms of service quality, resource allocation, and response time?</t>
            </li>
            <li>
              <t>What about discovery framework providers? Would they offer a free service, similar to Google Search, or would access be more structured and monetized?</t>
            </li>
          </ul>
          <t>Developing fair, transparent, and scalable billing mechanisms is critical to fostering collaboration across stakeholders and sustaining the economic viability of distributed AI inference ecosystems. These challenges call for further research into incentive structures, dynamic pricing models, and smart contract-based enforcement, especially in scenarios involving cross-organizational or cross-network cooperation.</t>
        </section>
      </section>
    </section>
    <section anchor="framework-for-da-itn-data-and-agent-aware-inference-and-training-network">
      <name>Framework for DA-ITN (Data and Agent Aware Inference and Training Network)</name>
      <t>The DA-ITN is envisioned as a multi-domain, multi-technology network operating at the AI layer, designed to address the various layers of complexity inherent in modern AI ecosystems. As mentioned earlier, the network aims to support a wide range of requirements, some of which are outlined above, across AI training, inference, and agent-to-agent interaction.</t>
      <t>The network consists of a set of nodes and equipment connected via one or more traditional underlay networks, as depicted below.</t>
      <figure anchor="fig1">
        <name>Figure 1: DA-ITN nodal view</name>
        <artwork align="center"><![CDATA[
+---------------------------------------------+
| DA-ITN nodal view                           |
|                                             |
|  +----------------+     +----------------+  |    DA-ITN node types
|  | DA-ITN Node (A)|<--->| DA-ITN Node (B)|  |      A- Data node
|  +----------------+  |  +----------------+  |      B- Compute node
|                      |                      |      C- Storage node
|                      |                      |      D- Model node
|  +----------------+  |  +----------------+  |      E- Evaluation node
|  | DA-ITN Node (E)|<--->| DA-ITN Node (G)|  |      F- Agent node
|  +----------------+  |  +----------------+  |      G- Multi-purpose node
|                      |                      |
|                      |                      |
|  +----------------+  |  +----------------+  |
|  | DA-ITN Node (F)|<--->|DA-ITN Node(C+D)|  |
|  +----------------+     +----------------+  |
|                                             |
+---------------------------------------------+
]]></artwork>
      </figure>
      <t>DA-ITN nodes, together with the network's core functionality, interact to provide different training, inference, and agentic services. In this manner, DA-ITN can be divided into four interacting major building blocks, as shown below.</t>
      <figure anchor="fig2">
        <name>Figure 2: DA-ITN high level architecture and building blocks</name>
        <artwork align="center"><![CDATA[
+--------------------+         +--------------------+ 
|   DA-ITN Service   |         |   DA-ITN Client    |
| Provider Community |         |     Community      |
+--------------------+         +--------------------+
    ↑     ↑                               ↑     ↑
    |     |                               |     |
    |     |                               |     |
    |     +-------------------------------+     |
    |                     |                     |
    |                     |                     |
    |                     ↓                     |
    |           +--------------------+          |
    |           |     DA-ITN Core    |          |
    |           |                    |          |
    |           +--------------------+          |
    |                     ^                     |
    |                     |                     |
    |                     |                     |
    v                     v                     v
+---------------------------------------------------+
|                    DA-ITN Enablers                |
+---------------------------------------------------+
]]></artwork>
      </figure>
      <section anchor="da-itn-core">
        <name>DA-ITN Core</name>
        <t>This block contains the DA-ITN's main internal modules, functions, and services. Dedicated logical
planes in this block handle interactions between its different modules and functions. These interactions are not visible or accessible to entities in other blocks. The DA-ITN core offers its services to external entities via clear, well-defined interfaces and protocols. The following figure illustrates the different modules and functions of the DA-ITN core block.</t>
        <figure anchor="fig3">
          <name>Figure 3: DA-ITN core and its different modules and function</name>
          <artwork align="center"><![CDATA[
+-----------------------------------+
|            DA-ITN Core            |
|                                   |
|   +----------+ +--------------+   |
|   | X-RCE    | |Registration &|   |     X-RCE:  Training, model, query, etc.
|   |          | |Authentication|   |             route compute engine
|   +----------+ +--------------+   |     XOD:    Model, agent deployment  
|   +----------+ +--------------+   |             optimizer
|   | X-DO     | |Discovery &   |   |     S-FAM:  Different Service feasibility
|   |          | |Advertisement |   |             assessment module
|   +----------+ +--------------+   |     TAG:    Training algorithm generator
|   +----------+ +--------------+   |     PVM:    Performance verification
|   | S-FAM    | |Billing &     |   |             Module 
|   |          | |Accounting    |   |     DDRT:   Data dynamics and resource
|   +----------+ +--------------+   |             topology
|   +----------+ +--------------+   |
|   | TAG      | |Reputation &  |   |
|   |          | |Trust Mgmt.   |   |
|   +----------+ +--------------+   |
|   +----------+ +--------------+   |
|   | PVM      | | Upgrade Mgmt.|   |
|   |          | |              |   |
|   +----------+ +--------------+   |
|   +----------+ +--------------+   |
|   | Resource | |Mobility Mgmt.|   |
|   | Mgmt.    | |              |   |
|   +----------+ +--------------+   |
|   +----------+ +--------------+   |
|   |   DDRT   | | Tools Mgmt.  |   |
|   |          | |     ???      |   |
|   +----------+ +--------------+   |
|            +---------+            |
|            |   OAM   |            |
|            +---------+            |
+-----------------------------------+
]]></artwork>
        </figure>
      </section>
      <section anchor="da-itn-service-provider-community">
        <name>DA-ITN Service Provider Community</name>
        <t>Providers of different services, such as data, model, agent, and resource providers, reside within the Service Provider Community block of the DA-ITN. Service providers join the network via a registration and authentication process offered by the DA-ITN core. Providers use DA-ITN to advertise their services and capabilities across the overall network. They can also register for notifications, e.g., on the arrival of new models, training data, or agents. DA-ITN dispenses revenue to providers for the services rendered via its billing and accounting module.</t>
        <t>The following figure shows different modules of DA-ITN service provider community.</t>
        <figure anchor="fig4">
          <name>Figure 4: DA-ITN Service Provider Community</name>
          <artwork align="center"><![CDATA[
+-------------------------------+
|       DA-ITN Service          |
|     Provider Community        |
|                               |
|  +----------+ +----------+    |
|  | Data     | | Model    |    |
|  | providers| | providers|    |
|  +----------+ +----------+    |
|  +----------+ +----------+    |
|  | Agent    | | Resource |    |
|  | providers| | providers|    |
|  +----------+ +----------+    |
|  +--------------+             |
|  | Tools        |             |
|  | providers ???|             |
|  +--------------+             |
+-------------------------------+
]]></artwork>
        </figure>
        <t>The tools module within the provider block requires further investigation and analysis. Agentic protocols such as the Model Context Protocol (MCP) provide access to MCP tools from the agent-interaction point of view. Whether DA-ITN needs to support additional tool capabilities for agents, or distinct tools for training and inference, is an open question for now. Will there be a need for a unified tools protocol that fits all utilities, or a protocol per utility?</t>
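As a purely illustrative sketch, the registration, advertisement, and notification flow described in this section could look like the following. All names here (ServiceDescriptor, ProviderRegistry, the descriptor fields) are assumptions for illustration, not part of any defined DA-ITN interface.

```python
from dataclasses import dataclass

@dataclass
class ServiceDescriptor:
    provider_id: str
    kind: str           # "data" | "model" | "agent" | "resource"
    capabilities: dict  # free-form capability advertisement

class ProviderRegistry:
    """Toy registry: providers register and advertise; clients subscribe
    to notifications about new services of a given kind."""
    def __init__(self):
        self._descriptors = []
        self._subscribers = {}  # kind -> list of callbacks

    def register(self, desc):
        # Admission here stands in for registration & authentication.
        self._descriptors.append(desc)
        for cb in self._subscribers.get(desc.kind, []):
            cb(desc)  # e.g. notify "a new model has arrived"

    def subscribe(self, kind, callback):
        self._subscribers.setdefault(kind, []).append(callback)

    def advertise(self, kind):
        # Return all currently advertised descriptors of one kind.
        return [d for d in self._descriptors if d.kind == kind]
```

In a real deployment the registry would of course be distributed and authenticated; the sketch only shows the shape of the register/subscribe/advertise interactions.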
      </section>
      <section anchor="da-itn-client-community">
        <name>DA-ITN Client Community</name>
        <t>This block represents the client side of DA-ITN. Clients are network participants requiring training, inference, or agent-to-agent interactions, as well as those who need access to resources, such as storage and compute, offered by resource providers in DA-ITN.</t>
        <t>DA-ITN enables clients to discover potential providers by tuning into the DA-ITN discovery and advertisement module, allowing them to select the best match based on their requirements. Alternatively, clients may delegate the matching process to DA-ITN, requesting that it identify the most suitable provider based on their criteria. For example, a client using the model training service may opt to fully control the training process and make all decisions independently. Alternatively, the client can delegate the training responsibilities to the DA-ITN core. In the case of delegation, modules such as X-RCE, DDRT, PVM, S-FAM, and TAG can work collaboratively to train the model on the client's behalf and deliver the finalized, trained model back to the client.</t>
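Purely as an illustration of the delegated matching mode described above, the core might filter advertised providers against the client's hard constraints and rank the survivors. The criteria, fields, and scoring rule below are hypothetical.

```python
def match_provider(providers, criteria):
    """Pick the provider that satisfies all hard constraints and
    maximizes a simple score (accuracy minus normalized cost).
    providers: list of dicts with "accuracy" and "cost" fields.
    criteria: {"min_accuracy": ..., "max_cost": ...} (illustrative)."""
    feasible = [p for p in providers
                if p["accuracy"] >= criteria["min_accuracy"]
                and p["cost"] <= criteria["max_cost"]]
    if not feasible:
        return None  # S-FAM-style outcome: request not serviceable
    return max(feasible,
               key=lambda p: p["accuracy"] - p["cost"] / criteria["max_cost"])
```

A client keeping full control would instead fetch the feasible list and apply its own selection logic; the delegation case simply moves this ranking into the DA-ITN core.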
        <figure anchor="fig5">
          <name>Figure 5: DA-ITN Client Community</name>
          <artwork align="center"><![CDATA[
+-------------------------------+
|       DA-ITN Client           |
|         Community             |
|                               |
|  +----------+ +----------+    |
|  | Data     | | Model    |    |
|  | clients  | | clients  |    |
|  +----------+ +----------+    |
|  +----------+ +----------+    |
|  | Agent    | | Resource |    |
|  | clients  | | clients  |    |
|  +----------+ +----------+    |
|  +--------------+             |
|  | Tools        |             |
|  | Clients   ???|             |
|  +--------------+             |
+-------------------------------+
]]></artwork>
        </figure>
        <t>It must be noted that a node/entity in DA-ITN can act as a provider, a client, or both. For example, a node providing data as its service might need access to a resource provider's service, or a model provider enabling inference might employ the services of data providers for Retrieval-Augmented Generation (RAG).</t>
        <t>Similar to the provider community block in DA-ITN, the tools module within the client community requires further study.</t>
      </section>
      <section anchor="da-itn-enablers">
        <name>DA-ITN Enablers</name>
        <t>This layer represents external and underlying services that DA-ITN itself employs to accomplish its different tasks. Various networking layers, access technologies, location, and sensing functions are examples of such services.</t>
        <figure anchor="fig6">
          <name>Figure 6: DA-ITN Enablers</name>
          <artwork align="center"><![CDATA[
+-------------------------------------------------------------------------+
|                             DA-ITN Enablers                             |
|                                                                         |
|  +---------------------------+  +-----------+  +-----------+            |
|  | Communications/Networking |  | Location  |  |  Sensing  |            |
|  |                           |  |           |  |           |            |
|  | +---------+  +----------+ |  | +-------+ |  | +-------+ |            |
|  | | Mobile  |  | Internet | |  | | GPS   | |  | | IoT   | |            |
|  | | network |  +----------+ |  | +-------+ |  | +-------+ |            |
|  | +---------+  +----------+ |  | +-------+ |  | +-------+ |            |
|  | | NTN     |  | WiFi     | |  | |Sensors| |  | | ISAC  | |  Others??? |
|  | +---------+  +----------+ |  | +-------+ |  | +-------+ |            |
|  | +-----------------------+ |  | +-------+ |  | +-------+ |            |
|  | |        Others?        | |  | |Mobile | |  | |Others?| |            |
|  | +-----------------------+ |  | |network| |  | +-------+ |            |
|  |                           |  | +-------+ |  |           |            |
|  |                           |  | +-------+ |  |           |            |
|  |                           |  | |Others?| |  |           |            |
|  |                           |  | +-------+ |  |           |            |
|  +---------------------------+  +-----------+  +-----------+            |
+-------------------------------------------------------------------------+
]]></artwork>
        </figure>
      </section>
    </section>
    <section anchor="da-itn-high-level-architecture">
      <name>DA-ITN high level architecture</name>
      <t>To manage these complexities and cater for the requirements, we propose structuring the DA-ITN around four core components: a Control Plane (CP), a Data Plane (DP), an Operations and Management (OAM) Plane, and an Intelligence Layer. It is important to note that the DA-ITN is agnostic to the underlying communication infrastructure, allowing it to operate seamlessly over heterogeneous networks, whether mobile, wire-line, or satellite-based. The DA-ITN integrates with these underlying infrastructures through any available means, embedding its control and intelligence capabilities to coordinate and manage AI-specific services in a flexible and scalable manner.</t>
      <section anchor="control-plane-and-intelligence-layer">
        <name>Control plane and Intelligence Layer</name>
        <t>The Control Plane and Intelligence Layer work together to enable an efficient, reliable, and timely information collection infrastructure. They continuously gather up-to-date information on data availability, model status, agent conditions, resource utilization, and reachability across all participating entities. The collected information comes in the form of dynamic descriptors for data, models, and resources: essential components for enabling intelligent, context-aware decision-making within the AI ecosystem, as previously highlighted. Also, with the help of the data, resource, and reachability topology engine (DRRT) housed within the Intelligence Layer, the gathered information and descriptors can be used to construct meaningful relationships across the ecosystem. These are captured in the form of dynamic topologies or map-like structures, which help optimize decision-making across training, inference, and agent-to-agent collaboration tasks. This design provides the continuous awareness that is essential for the success, reliability, accuracy, and responsiveness of the AI functionalities and services enabled by the DA-ITN within the AI ecosystem.</t>
        <t>The DA-ITN control plane also lays the foundation for an advanced discovery infrastructure where the generated descriptors can be made easily accessible to all authorized participants to facilitate their required AI service. For example, AI clients subscribed to training services can access up-to-date data descriptors and resource topologies, enabling them to select appropriate datasets and compute resources that align with their performance and accuracy goals. Similarly, inference clients or agents seeking collaboration can discover models based on capabilities, or submit task descriptors that enable models to respond intelligently and autonomously.</t>
        <t>Aside from descriptor collection, topology creation, and discovery, the DA-ITN control plane also supports a secure and trusted environment where clients, data providers, model providers, and resource providers can engage in AI processes without compromising integrity or accountability. It also plays a key role in managing charging, billing, and rights enforcement, ensuring that all contributors to the AI service chain are fairly compensated and protected.</t>
        <t>It is worth noting that the DA-ITN's Control Plane is not constrained by specific protocol stacks. Instead, it provides a flexible connectivity and coordination infrastructure upon which various AI-related protocols, such as Agent-to-Agent (A2A), the Model Context Protocol (MCP), or the AI Coordination Protocol (ACP), can operate. Regardless of the protocol used, implementations must meet the core DA-ITN requirements, including timely information exchange, flexible descriptor encapsulation, support for multi-model and multi-domain environments, and robust security and privacy protections. The DA-ITN is also designed to support both centralized and decentralized modes of operation, offering high adaptability across different deployment contexts.</t>
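The flexible descriptor encapsulation mentioned above can be sketched as a protocol-agnostic envelope: the control plane carries an opaque payload plus routing metadata, so A2A, MCP, or other agentic protocols can ride on the same carrier. The field names and JSON framing below are assumptions, not a defined wire format.

```python
import json

def encapsulate(protocol, descriptor, src, dst):
    """Wrap a protocol-specific descriptor in a neutral control-plane
    envelope. The payload is opaque to the DA-ITN control plane."""
    envelope = {"ver": 1, "proto": protocol, "src": src, "dst": dst,
                "payload": descriptor}
    return json.dumps(envelope).encode("utf-8")

def decapsulate(frame):
    """Recover the envelope; the control plane only inspects the
    metadata, never the payload itself."""
    envelope = json.loads(frame.decode("utf-8"))
    assert envelope["ver"] == 1
    return envelope
```

Any serialization (CBOR, protobuf, etc.) would work equally well; the point is only that the carried protocol is named, not interpreted, by the control plane.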
        <t>It’s also important to clarify that the Intelligence Layer encompasses all previously mentioned DA-ITN core functions, along with any additional intelligence required to support the full range of DA-ITN services. The term “Intelligence Layer” is intentionally broad to allow flexibility in its design and contents. Nonetheless, its role is clearly defined: it serves as a functional layer that interfaces with other DA-ITN components through the control plane, data plane, and OAM plane to fulfill its responsibilities.</t>
      </section>
      <section anchor="data-plane">
        <name>Data Plane</name>
        <t>On the other hand, the Data Plane of the DA-ITN provides support for mobility management and intelligent scheduling, enabling the dynamic creation of rendezvous points where data, queries, models, agents, and compute infrastructure can be brought together with minimal latency and overhead. Thanks to its infrastructure-agnostic nature, the DA-ITN leverages existing communication networks—such as those offered by 6G or edge service providers—as tools to enable model mobility, data mobility, and agent-to-agent coordination. This capability is essential for supporting scenarios where mobility or geographical dispersion of resources would otherwise lead to performance degradation or inefficiency.</t>
        <t>The construction of the Data Plane may fall under the responsibility of the DA-ITN core or Intelligence Layer, which would orchestrate the necessary resources from the DA-ITN Enabler block to build the required structure. Alternatively, the Enabler block itself may possess sufficient intelligence to autonomously construct the Data Plane as needed.</t>
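The rendezvous-point idea above reduces, in its simplest form, to a cost minimization: pick the site where bringing the moving components together is cheapest. The sketch below assumes abstract per-site transfer costs (latency, bytes, or money) that in practice would come from the topology layer; it is an illustration, not a prescribed algorithm.

```python
def select_rendezvous(sites, model_cost, data_cost):
    """sites: list of candidate compute-site ids.
    model_cost / data_cost: dicts mapping site -> cost of moving the
    model / the data to that site. Returns the cheapest rendezvous."""
    return min(sites, key=lambda s: model_cost[s] + data_cost[s])
```

Extending the objective with compute price, load, or trust scores changes only the key function, which is why the Data Plane can defer this decision to the core or to the Enabler block interchangeably.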
      </section>
      <section anchor="operation-and-management-plane-oam">
        <name>Operation and Management Plane (OAM)</name>
        <t>Finally, the Operations and Management (OAM) layer plays a critical role in supporting the day-to-day operational needs of the AI ecosystem. This layer is responsible for a wide range of essential functions, including monitoring, registration, configuration, fault management, and lifecycle maintenance of models, data, and services. It serves as the management backbone of the DA-ITN, ensuring transparency, accountability, and operational control throughout the system.</t>
        <t>Consider the scenario of an AI model training client deploying a model into the ecosystem for training. Through the capabilities of the OAM layer, the client can continuously monitor the training performance of their model in real time—tracking key performance indicators such as convergence speed, loss metrics, resource usage, and network traversal. The model’s location within the ecosystem can be dynamically tracked, allowing clients to know exactly where their model resides or which data centers or devices it is interacting with.</t>
        <t>Moreover, the OAM layer enables interactive control. Clients can use it to adjust training parameters on the fly, such as learning rates, data sampling strategies, or the choice of collaborative partners. They can even pause, resume, or terminate the training process at will, giving them full agency over the lifecycle of their models. This flexibility is crucial in adaptive AI systems where responsiveness and real-time decision-making are valued.</t>
        <t>In this way, the OAM layer effectively functions as the control dashboard or command-line terminal of the DA-ITN-enabled AI ecosystem. Whether through a graphical user interface (GUI), APIs, or automated orchestration scripts, the OAM provides the necessary tools for fine-grained management, status visualization, and policy enforcement.</t>
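To make the "control dashboard" role concrete, the following hypothetical sketch shows the kind of per-job OAM handle the text describes: KPI reporting, on-the-fly parameter adjustment, and pause/resume/terminate control. The class and method names are illustrative assumptions only.

```python
class TrainingJobOAM:
    """Toy OAM handle for one training job (illustrative API)."""
    def __init__(self, job_id):
        self.job_id = job_id
        self.state = "running"
        self.params = {"learning_rate": 0.01}
        self.kpis = {"loss": None, "epoch": 0}

    def report(self, loss, epoch):
        # Fed by the OAM plane: convergence/loss monitoring.
        self.kpis.update(loss=loss, epoch=epoch)

    def set_param(self, key, value):
        # On-the-fly adjustment, e.g. learning rate or sampling strategy.
        self.params[key] = value

    def pause(self):
        self.state = "paused"

    def resume(self):
        self.state = "running"

    def terminate(self):
        self.state = "terminated"
```

Whether this surface is exposed as a GUI, an API, or orchestration scripts, the underlying operations are the same small set shown here.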
        <t>Beyond individual model control, the OAM layer also facilitates system-wide coordination and policy administration. OAM, in coordination with a potential policy enforcement module, may help ensure compliance with service-level agreements (SLAs), enforce data governance policies, and manage access rights across domains. It plays a foundational role in building trustworthy, maintainable, and operationally efficient AI services across diverse infrastructure providers and stakeholders.</t>
      </section>
      <section anchor="summary-of-the-da-itn-general-framework">
        <name>Summary of the DA-ITN General Framework</name>
        <t>Accordingly, the DA-ITN is well positioned and designed to provide a range of intelligent services that can be leveraged by both AI clients and service providers. It forms the foundation for a scalable, decentralized AI internet, driving the emergence of a vibrant and cooperative agent-based ecosystem. By enabling the formation of adaptive and intelligence-driven topologies and being agnostic to the infrastructure, the DA-ITN facilitates more effective decisions in AI training, inference, and agent-to-agent interactions—ultimately supporting a more responsive, resilient, and capable AI infrastructure that can scale with future demands.</t>
        <t>In the following sections, we provide more detailed insights into the specific DA-ITN components that support training and inference services.</t>
      </section>
    </section>
    <section anchor="da-itn-for-training">
      <name>DA-ITN for Training</name>
      <t>The training architecture of the DA-ITN consists of five layers: i) the terminal layer (DA-ITN provider and client communities); ii) the network layer (Enablers); iii) the data, resource, and reachability topology layer (DRRT); iv) the DA-ITN intelligence layer (DA-ITN core); and v) the OAM layer. The layers interact together using control and data planes (CP and DP respectively) as is discussed in the following.</t>
      <t>First, the network layer, which is at the heart of the DA-ITN training system, is responsible for providing connectivity services to the four other layers. It provides both control and data plane connectivity to enable various services. The network layer connects to the terminal and DRRT layers via CP and DP links, and connects to the intelligence layer via a CP link only. The network layer also enables the overarching OAM layer by providing a multi-layer connectivity structure.</t>
      <t>Second, the terminal layer, from the point of view of training, is the lowest layer in the architecture, and it contains the terminal components of the system. These include nodes that host the training data, facilities that provide computing resources where the model can be trained, and newly proposed components that we refer to as the model performance verification modules (MPVMs), where the model testing phase takes place. It should be noted that facilities providing computing resources come in various forms, including private property such as personal devices, in a distributed form such as mobile edge computing in 6G networks, on the cloud such as the AWS cloud, or anywhere that is accessible by both the data and the model and holds sufficient compute for training. As for the MPVM, this module is important when conducting distributed training, as it takes the role of a trusted proxy node that holds a globally constructed testing dataset - the dataset is constructed by collecting sample datasets from each participating node - and provides safe and secure access to it. Last, the terminal layer also hosts the AI training clients.</t>
      <t>The terminal layer relies on the network layer to build an overarching knowledge-sharing network. To be exact, the network layer provides three main services to the terminal layer, namely: i) moving models and data between the identified rendezvous compute points where training can happen; ii) moving the models towards the MPVM units where performance evaluation can be conducted to keep track of the training progress; and iii) enabling AI training clients to submit their models, monitor the training progress, modify training requirements, and collect the trained models. Control and data traffic exist for each one of these services. For instance, moving a model toward a compute facility requires authorization for the use of the resources; hence, authorization control data must be exchanged over the Terminal-NET CP links. The service also requires the physical transmission of the model to the computing facility, which is handled over the Terminal-NET DP link. Similar situations can be extrapolated for the other provided services. It is worth noting that the network layer can be built on top of any access network technology, including 3GPP cellular networks, WiFi, wireline, peer-to-peer, satellites, and non-terrestrial networks (NTN), or a combination of the above. These networks can be used to build dedicated CP and DP links strictly designed to enable the DA-ITN training system and its services.</t>
      <t>Third, the DRRT layer holds all the information required to make accurate decisions and sits between the intelligence layer and the terminal layer. It consists of a DRRT-manager (DRRT-M) unit, which is the brain of this layer and interfaces with the other layers over CP links. The DRRT layer provides the intelligence layer with visibility and accessibility services for specific information about the underlying terminal layer's data, resource, and reachability status. To be exact, the DRRT layer holds information regarding the type, quality, amount, age, dynamics, and any other essential properties of the data available for training. It also provides reachability information for the participating nodes to avoid unnecessary communication overhead and packet drops. Lastly, the DRRT also contains information about computing resources and MPVMs, such as resource availability, location, trustworthiness, and the nature of the testing datasets hosted at the different MPVM units.</t>
      <t>The DRRT relies on the network layer to collect the information necessary to build the Global-DRRT topology (G-DRRT). The G-DRRT is not a model-specific topology; it is rather a large canvas that holds a high-level view of the data, resource, and reachability information. The DRRT-M unit in the DRRT layer communicates with the network layer over CP links to manage the collection of the required information. For instance, the DRRT-M may instruct the 3GPP component of the network layer to convey connectivity information about the data nodes, or it might instruct it to wake up an idle data provider device. It might also instruct satellites to share GPS locations of mobile data nodes. The data collected by the network layer are then shipped toward the G-DRRT component of the DRRT layer over DP links. The G-DRRT hosts intelligence that allows it to convert the collected information into a useful global topology ready to provide services to the AI training clients.</t>
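A toy sketch of this construction: collected node descriptors become vertices, reachability reports become edges, and a lighter model-specific topology can later be carved out by filtering on a request's needs. All descriptor fields used here (data_type, flops) are assumptions for illustration.

```python
def build_g_drrt(descriptors, reachability):
    """descriptors: {node_id: descriptor dict}; reachability: iterable
    of (a, b) pairs reported by the network layer.
    Returns a simple undirected graph as (nodes, adjacency)."""
    adj = {n: set() for n in descriptors}
    for a, b in reachability:
        adj[a].add(b)
        adj[b].add(a)
    return descriptors, adj

def carve_model_drrt(g_drrt, needed_data_type, min_flops):
    """DRRT-A-style carving: keep only nodes useful for one request,
    i.e. matching data sources or sufficiently capable compute sites."""
    nodes, adj = g_drrt
    keep = {n for n, d in nodes.items()
            if d.get("data_type") == needed_data_type
            or d.get("flops", 0) >= min_flops}
    return ({n: nodes[n] for n in keep},
            {n: adj[n] & keep for n in keep})
```

The carved graph is what components like MTRCE would actually search, which is the overhead-reduction argument made for model-specific topologies later in this document.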
      <t>Fourth, the Intelligence Layer is responsible for hosting the decision-making logic required to fulfill the specific training requirements submitted by clients. It contains several key components that collaboratively determine how, where, and whether training should proceed. Among these is the Model Training Route Compute Engine (MTRCE), which identifies suitable rendezvous points between models and data. Another critical component is the Training Feasibility Assessment Module (T-FAM), which functions as an admission controller, evaluating whether a submitted model, given its requirements and constraints, can be effectively trained within the available ecosystem.</t>
      <t>Additional intelligent modules include the Training Algorithm Generator (TAG) and the Hyperparameter Optimizer (HPO). These components are responsible for selecting the appropriate training paradigm—such as reinforcement learning (RL), federated learning (FL), or supervised learning (SL)—as well as determining other configuration details like the number of training epochs, batch size, and optimization strategy. The Intelligence Layer also interfaces with both the Network Layer and the DRRT Layer to acquire the context needed for effective decision-making. From the Network Layer, it receives control data over CP links—this includes model structure, target accuracy, convergence time, monitoring instructions, and client-specified training preferences. It also receives feedback data that allows the TAG and HPO modules to refine their recommendations dynamically.</t>
      <t>Meanwhile, the Intelligence Layer connects to the DRRT Layer via both CP and DP links to access up-to-date visibility into training data, compute resources, and node reachability. This information is essential for components like MTRCE and T-FAM to make routing and admission decisions. To further enhance decision efficiency, the Intelligence Layer may also host a DRRT-Adaptability Unit (DRRT-A). This optional module works in coordination with MTRCE, T-FAM, and the DRRT Manager (DRRT-M) to generate model-specific DRR topologies—lightweight, targeted representations carved out from the global DRR topology. These customized topologies are optimized to reduce computational overhead and accelerate decision-making for individual training requests.</t>
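An illustrative T-FAM-style admission check, under the same hypothetical descriptor fields used throughout: a training request is admitted only if at least one site in the (possibly model-specific) topology offers enough compute and reachable data of the required type. Field names and thresholds are assumptions, not part of any DA-ITN specification.

```python
def admit_training_request(request, sites):
    """request: {"min_flops": ..., "data_type": ...} (illustrative).
    sites: list of site descriptors. Returns True if some site can
    host the training, i.e. the request passes admission control."""
    return any(site["flops"] >= request["min_flops"]
               and request["data_type"] in site["data_types"]
               and site["reachable"]
               for site in sites)
```

A rejected request would be bounced back to the client with a reason, rather than consuming network and compute resources on a training run that cannot meet its constraints.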
      <t>Last, the OAM layer, which spans all the layers, is mainly intended as a management layer to configure the training components, the connectivity of the network layer, and enable feedback functions essential for progress monitoring and model localization and tracking. It is also intended to provide feedback to the clients about their submitted models every step of the way.</t>
    </section>
    <section anchor="da-itn-for-inference">
      <name>DA-ITN for Inference</name>
      <t>The Inference architecture of the DA-ITN provides automated AI inference services using a similar structure to the training architecture with a few differences.</t>
      <t>First, unlike training, where the moving components are models and training data, and the rendezvous points are computing facilities, in inference, models/agents and queries/tasks are the moving components that require networking, and the rendezvous points are model hosting facilities.</t>
      <t>Second, in inference, the clients are both the task/query owners as well as the model/agent owners. Query owners are the inference service users who send their queries into the system and collect the resulting inference. On the other hand, model owners are divided into two types. The first type consists of model hosts - the model used for inference does not have to be owned by them, but it is hosted on their computing facilities.  The second type consists of model/agent providers - they develop models/agents and deploy them either at their own facilities or at model hosts. Model owners are represented in the terminal layer as model deployment facility providers (MDFP) which are distributed across the global network.</t>
      <t>Third, the network layer provides the following services to the terminal layer using its control and data planes: i) model mobility from model generators to model hosts; ii) query routing towards models deployed on MDFPs; iii) model mobility from one location to another in case of load-balancing situations; iv) model mobility towards re-training and calibration facilities, which may be hosted on MPVM units; v) query response and inference result routing towards the query owners or any indicated destination around the globe; and vi) feedback and monitoring information to model and query owners.</t>
      <t>Fourth, the DRRT layer is replaced by a query, resource, and reachability topology (QRRT) layer. It provides the same type of services to the other layers, but from the point of view of queries and models. That is, it provides information about both models and queries, such as i) for models: model locations, model capabilities, current loading conditions, inference speed, inference accuracy, and model reachability and accessibility (i.e., reachability and accessibility of the MDFP); and ii) for queries: query patterns and dynamics (possibly associated with a geographical location), query types, and the reachability status of query owners for response communication purposes. The information collected by the QRRT is used to make appropriate decisions about model deployment and distribution strategies, query-to-model routing, and response routing. The QRRT has a management function that coordinates with the Network layer to collect the required information from the terminal layer to build the Global-QRRT (G-QRRT). It also optionally communicates with the QRRT-adaptation (QRRT-A) function in the inference intelligence layer to build query- or model-specific QRRTs.</t>
      <t>Last, the inference intelligence layer hosts different intelligent decision-making components, including the Query Feasibility Assessment Module (Q-FAM), the Query Inference Route Compute Engine (QIRCE), and the Model Deployment Optimizer (MDO) module. Just as in training, these components make decisions based on the QRRT. For instance, the Q-FAM hosts intelligence that acts as an admission control unit, evaluating whether a submitted query can be serviced given the current network inference capabilities. The QIRCE handles query routing towards the correct models while observing loading conditions. Furthermore, the MDO module acts as an admission controller for newly submitted models: it evaluates deployment feasibility based on the submitted model's architecture, compute requirements, and storage requirements. It matches these requirements to the currently available resources indicated in the QRRT and makes an admittance decision. It also handles deployment location optimization, aiming to minimize query response time and cost for inference.</t>
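A minimal sketch of QIRCE-style routing with an implicit Q-FAM check, assuming hypothetical deployment descriptors: among deployments whose capabilities match the query type and that still have headroom, route to the least loaded one; if none qualifies, the query is not admitted.

```python
def route_query(query, deployments):
    """query: {"type": ...}; deployments: list of dicts with
    "capabilities" (set), "load", and "capacity" fields (illustrative).
    Returns the chosen deployment, or None if not serviceable."""
    candidates = [d for d in deployments
                  if query["type"] in d["capabilities"]
                  and d["load"] < d["capacity"]]
    if not candidates:
        return None  # Q-FAM-style rejection
    # Route to the least relatively loaded matching deployment.
    return min(candidates, key=lambda d: d["load"] / d["capacity"])
```

Real routing would also weigh network distance to the MDFP and response-path cost from the QRRT, but those enter the same key function without changing the structure of the decision.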
    </section>
    <section anchor="da-itn-facilitation-agentic-networks">
      <name>DA-ITN-Facilitation Agentic Networks</name>
      <t>While agent-to-agent interaction is commonly associated with task-oriented collaboration—often relying on inference chaining as discussed in the inference section—we propose that this only reflects one side of the coin. We believe there is a transformative alternative: collaborative agent training, where agents not only work together to complete tasks, but also contribute to each other's learning and evolution. This paradigm marks a significant shift from traditional models and positions the DA-ITN as an ideal enabler of a truly agentic future, where intelligent agents can grow, adapt, and improve continuously through structured cooperation.</t>
      <t>It is important to distinguish clearly between collaborative training and task-based collaboration. In task-based collaboration, agents exchange data or partial inferences related to the execution of a specific, external objective—such as processing a query or generating an output. Their internal models remain unchanged; they simply contribute to a shared computational goal. In contrast, collaborative training focuses on internal evolution: the goal is not to solve an external task, but to enhance the capabilities of the participating agents themselves.</t>
      <t>In a collaborative training setup, agents may exchange model parameters, training datasets, or knowledge representations. They may engage in distributed training paradigms such as federated learning, where learning happens locally and updates are shared globally, or continual learning, where agents adapt over time based on new experiences. They may also employ knowledge distillation or transfer learning, where more advanced "teacher agents" guide "student agents" through structured training programs.
One can even envision a highly dynamic and autonomous system where agents attend “agent schools”—virtual environments where they gather to learn, be tested, and graduate. In this imagined scenario, teacher agents would be responsible for training student agents, evaluating their performance, and possibly issuing certifications or verifiable credentials that attest to an agent’s competencies and readiness for deployment. These credentials serve as trust foundations in the broader agent ecosystem, ensuring that certified agents can be reliably selected and trusted by inference clients or other agents.</t>
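      <t>The federated-learning mode of collaborative training mentioned above, in which learning happens locally and only updates are shared globally, can be sketched with a minimal FedAvg-style aggregation; plain lists stand in for model parameter vectors, and the function names are illustrative assumptions.</t>

```python
# Minimal federated-averaging sketch (FedAvg-style), illustrating
# how agents could share locally learned updates without sharing
# raw training data. Lists stand in for parameter vectors.

def local_update(params, gradient, lr=0.1):
    """One local training step on an agent's own data."""
    return [p - lr * g for p, g in zip(params, gradient)]

def federated_average(agent_params):
    """Aggregate per-agent parameters into a shared global model
    by coordinate-wise averaging."""
    n = len(agent_params)
    return [sum(vals) / n for vals in zip(*agent_params)]
```

      <t>Knowledge distillation would replace the averaging step with a teacher agent supervising student outputs, but the communication pattern—local computation, global exchange of compact updates—remains the same.</t>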
      <t>To support such a vision, a wide range of new functional and technical requirements must be addressed. These include secure model sharing, certification and validation infrastructure, identity management, trust negotiation, resource discovery for training, and scheduling of learning sessions. Fortunately, many of these requirements align naturally with the capabilities and components of the DA-ITN architecture—including its support for mobility, discovery, descriptor sharing, trust enforcement, dynamic rendezvous, and topology management.</t>
    </section>
    <section anchor="security-considerations">
      <name>Security Considerations</name>
      <t>Security considerations are addressed throughout this document as part of the privacy and security requirements.</t>
    </section>
    <section anchor="iana-considerations">
      <name>IANA Considerations</name>
      <t>This document has no IANA actions.</t>
    </section>
    <section anchor="conclusions">
      <name>Conclusions</name>
      <t>As AI continues to evolve and integrate into every facet of modern life, it becomes increasingly clear that the supporting infrastructure must evolve with it. The training and inference processes—central to the success of AI—are no longer simple, isolated tasks; they are complex, distributed, and require intelligent coordination across data, compute, and communication domains.</t>
      <t>The DA-ITN architecture offers a forward-looking response to this complexity by providing a cohesive, scalable, and intelligent network ecosystem. With its dedicated control, data, and operations &amp; management planes, DA-ITN not only supports the technical requirements of training and inference but also addresses critical concerns such as mobility, privacy, trust, and agent collaboration.</t>
      <t>Ultimately, DA-ITN lays the foundation for a new generation of AI-native networks—capable of enabling persistent learning, dynamic agent interaction, and decentralized intelligence at scale. As we move toward an AI-driven future, such architectures will be essential for building reliable, trustworthy, and efficient AI ecosystems.</t>
    </section>
  </middle>
  <back>
    <section anchor="contributors" numbered="false" toc="include" removeInRFC="false">
      <name>Contributors</name>
      <contact fullname="Hesham Moussa">
        <organization>Huawei Canada</organization>
        <address>
          <email>hesham.moussa@huawei.com</email>
        </address>
      </contact>
      <contact fullname="Arashmid Akhavain">
        <organization>Huawei Canada</organization>
        <address>
          <email>arashmid.akhavain@huawei.com</email>
        </address>
      </contact>
      <contact fullname="Tong Wen">
        <organization>Huawei</organization>
        <address>
          <email>tongwen@huawei.com</email>
        </address>
      </contact>
      <contact fullname="Reza Rokui">
        <organization>Ciena</organization>
        <address>
          <email>rrokui@ciena.com</email>
        </address>
      </contact>
    </section>
  </back>
  <!-- ##markdown-source:
H4sIAAAAAAAAA+29XZMj55UeeM8I/ocMToRUJQKlmZGti6K9dKmbbLbNJpvd
LXFvE8ALINWJTExmoqqh5W7oyvcbqxtHrP+cfonP9zlvZqK6SXHk2Fm3HUNV
FZD5fp7P5zxnuVx+/NFQDXW6LT65e158k4aHtntbbNuueNOVVVM1u0XxvNmm
LjXrtCjKZlPc7VIzVGv49ZC6cj1UbdN/8vFH5WrVpXt6zpL+1KQBfr0uh7Rr
u/NtUTXb9uOPPv5o066b8gBv3HTldliWb/flPbxqeWhPfV8uy2rZ8DCW//jP
H3/Un1aHqu/hJcP5CF96/sWbLz/+qDkdVqm7hYfB4+E/axhDavpTf1sM3Sl9
/BEM5Dcwpi6Vt4WO5uOP8Km7rj0d4Stv0xl+3MD/KpaFz51/lCnK73r9TKUr
ob8YZJH05y/WbX/uh3Qovuxgjv7A+KdX6V9OVZcO8I4eF6Q8DfsWJ7Ms4MPF
9lTXvEB3XdnvDxUsuawR/rntdmVT/anEdb8tvjqVD6kqnpRNuSnxz+lQVvVt
UcpXb3R5/9OePnmzbg/TF32V+n15KF7QDnz4S/b0tRveuPCCAmdFmzJ01eo0
zE7u53/n32/58ve8aZtd8X26+Pjw3AE++pAefdqr9KeyeNW+PVXT5z2pUhOH
2XX4uf+0xl/bw/D/NW13gO/c4934+CO8ef7zckkfgv8U5aof8Arjz3fdUG2r
dVXWdGHqutrhSS+u7p5fF1VfdOWx2tRnPPJNj8+DYw8XYnOCR1SpJ9GQ7lN3
3pTnoq62IC22p1SnTbE6F+XmvoSH9fCF4tBuUg0rvN5XQ1oPpy71C7tIxRGW
flPtDj0LG7jfJV472A+42PRpkk4wNhA+uCr0Mbz/p8MRf74pvmofcCDw0H0q
0nYLL4GZN6nnQXaprspVVVfDuWi3eDU36ZjgD/AsuO7w+FOzoUfDUhy7Fobd
JxIsMkR8iEmCm+KLcr3Xz8Hvh67dnHCqp6b6l1Mq1vuyrlOzg9/Am0FcbeAU
8LwOcOx2JAcWMIPD8TTQa/GHpqFRwxgX8OzqvlyfcZFgsRdFn9anjv6CI9m1
MNcGV/cGt/E5TGIP20XCdVE8JB8SrcdTfLGJ8eLuAWTk0iQ8/UUlv6mDq6d3
y+dvvrn+65//UuK0tlXaLPC5ckrgPYdTPVTLY102sD8q+mBh+2rX8IzLzabD
FcJB4Gkv+iNMsTsdcBO6IBKLJqUNfweO9apOxX3ZVXDfYeLdfbWWs1YF9VM8
VDDphp4N+5lU1N4UPHLcnvtqg98s+nVZ01PxIXDrj3g4xkds2JeD7kKPj1zX
FY5twRsnj+vgZz7N4RclrWv4hQ672JZrPHUlyEM53WHT4azB8rSnDucHU+9P
x2PbDbAEm+XQLvF8DqaRq1wjl6KuwpLQDVyf1zDP9ig3pb8p3sACxZtXbFNJ
NxC2alOt6XSS2G5rniq/wB9R/CKc2uLq27sX1wXtes/71eNDww1bsLyAWw3D
lRNbrtdwxQa9gys4ZSk1PnqWAgMIFR4wrOWx7WFkfrDggDftQF9p8KygSAgr
1Q/wnrLbiOBcFKCE8Dv2eVxg2BVYDDwSXdLj72/AJ46PE55UPr64ZjfFH+Rc
wgCHdt3WvX1rU23pkQO8uao3OKVV3a7fwhlGcQBzhgXfVk12zP1L21Oz5kNR
0TKwUGeZDVppUyf86R9QUNPVxs+SDH9e7GFGKxjxAeYGW/VHGNC2XZ9I8nZp
jU8/pxIPIN6ZohrwL9v6RNNXGZ/eHWEBcdTbDlSrifWh7GEKdfU2wTXap82p
xs/ADPAg1+ldgQYXzPEEEhH+u09lPezXJa7VGxRKYHs94EvhGWW3S/CmTYei
+e+uIr7E8/KuxEEveCzwf5vdCQ42v7wvrr7++kV/zZN9AuLg2cs3i+JJXZ42
8JVnoHj57U9TOr5OCX562Fc4bXh70z7A6m5wfqdeTicvna7MkN4NYYByS2r5
ARex5Zu+bjf0X5soXL763Fcw932124P43YMQf/7XP/+33jUzafriCGqww81Z
tW1PEolOCuoUelxfwfSr7RlfVy9RXhTlEX6zVmFxx+MFcUOXB2RJ2bFkhHOG
TziA5oMzXOAfWDPS5OFdJVjseDLgNG96un017I4qz3Tf1vd0uloWd/jMhwru
RdvAmqEAqhpQnXg18KOJN2wNG0Uipcp1HF44EB0HHBZdZXhj15ZkrIDegcei
69Dj7eW9BT1G55pWIOFzYbhbmXjBd71nnUVnciNbVQedt9aDQpfTTA5ZKjiH
aEjBI+DOtTuw4ESn4FhhUclWYONDX4dCEwfZ6N9BSWYSXeWkakO8c21DGvMR
VVk16/q04c2YUx9T1aFKzg2LRZGGdbSrXD3Rgc6/XBbsneH8QOWhtuPT0t6r
aGcdT2cTlh9ODRoEuPHgkFUNaqGF2Q4LOWqodDZByYAWOXVkFoF0qFA940BP
vSkeOUa4GniKTnrRwIYCCY/2AiwymzEdKTj94shIvAEfpUstzbzFfQdzsW7P
OLKwfQd4LM3G3goTOsAthvnDuN1SE+1TbhOIm26DEqKjH/VatmDOD3B4HuCk
LVcJ1oyOmN3GkgwnlyWPmZLRqhiblW5Gjg1MmHdtUqerwH3HRRyS3GBcfLhU
esgf9nAo9/BRUga+JCjzT71eCbhqsOglqB9UFygxUDo2aCKhsqaLkVmBomvj
88y2BYXGU31Iot3Qdt93KaFUXw+4qHCuepw0elXstny48TQ7IJw3DBdE64bV
In4Czg1tB/xWhwyvhs+DNkHfJ0pUlgBsAOxOoB/YVKBzbqaFGiBqzD/gd7Zt
XbcPPc1XrKFi3hIvJHaSW3lrdEF4Sp+MHIDyAxyAT8wDED3uFhH+0NxXGJ/B
2aI1hcevS3AoeliU3DPQwZHp8TdbnHAg6GhFL4N2Uc45nVK4yOkejx0cs7dp
39ZslLtYNMseT5pY6sF6H9v7fDSqbrNEQXbOjHocEMplWPF29Uc+FThAGJc8
gbZgZKuwcSKSD08fjB0kU9mdo6Atm7DqbO6CwJcbl9mlBR4y2PEkZuM/FM9E
ez1pDyCRJvEn+tA/gDW5wZtgehBETnPGo4wGAxnk6+pYDuz243gy05jk37aC
2wxzH0r+RLXhr9/Is+FBZHb3fFIyT5tkT3FI4C83VX/gy8JarafZ8/dZZ4At
zVoCY2d0f/3HtgNtC/8bhK3cD1zhfaqD5wGLI4cHTIMKr8dZL/sGnr9tWcEk
NmNUmeKxAYOc1wCO+QFu4FpsNtiR6MDQCW9hbB0PHFV7cFLdQ8XF/11alyix
4oKq5o5uARkqLGdsP+jo6jr78qmjAR4ECBcaDb4X/oojhJUBWb6mJyUMX8jX
wYQFG+IAuzeIBBhrR1iVrhUvPrjZOIvMpsYTuj/Bxb040hJXHJYdThooFjzL
PXgUsmtyBA8JtnPjqo41KB7cZdX3JxQdsGH4xLImIwOt8NSZIbvCoawqcIaG
DkT8PawGnzdyLGHFw3rpDS1BUnQY0YO1BlVcZrZXCbuxq1A720RIhxTHPRjl
a7IQN6BfN6eyvim+5c3mY7Cn4eG66CPsYOj00KzesTAaWnAnQUe/BVWMP6L3
QGbYQlQMaeHT0Dbtgey9djugJPeH0v3ZtGyPt3hu3rd+MDRfq3Lg2G1iRwCd
5B4kOb3e7gqtB1oFa9RRHTnIvIFkjNoa9adK4l7w7arz0TYle9J3jU0+fGvf
nuDSchwNfUV6FWjOng9zyaEnk5AuClSHqziQO2rnjDY7iW1LHglbz6PIxAbs
I70DMdJENhvuDt9U0Vi0y7x/Ny5Uv+C5vEq7CmOtPJDXomden1b9uquO/Nu7
6QTu4gT4od+yltYri/4+C1zUv7TS2dqT/YWXjSKUvAkdDQZGezFsFodrQpju
fJDCFGSATVnjqeK75iuMgQUwA3oxtDbwkaHCc3tfVhx/W5dHXugq9Wp0w0aj
AunEW0/ux5DFxWscJcpoqPh3ld4DaRm6gGpMlOhhrlqMDKEnIjHbPNhnxw9d
DVNvQf+B9Mhcvi4OYF02eOFARYHvS8YNaTrxNFMDZk59kAAhvvxmJPTRFELl
uU8od9HtJP9FLnW0W1i6ykmMaQI0WpLoK9dcch1UXgRRQwqALTLTNH77xSnI
TOGRiYZORluToMgXw5U5KyPfHY9u4lIv0YRGqY8Cv2Qj0E7HGe07vSQ41Qcw
Zvp9deRwOv4SfrBxovxdhxCrx0HhaA4w85+q48gHK9668KNtpNh1mHIVDlBU
ZYuReFpM5dNN8Ts5una54Wnl5lANeI5glG0+pIX4KOH1eqR5vUXfsdVp75e8
B3kvdEHp6VGdwSEES+7gEYswHDOwMkFCjhyc8x1F8/A0wCFscZHJMvI9w/Od
3g03Y2EXFLHLkJn3w5k9gagDbaDqGUbi5k1dVocwm4XsdrbOcJ4wWlQ1ItKO
+ANlmDlOgd4wfAkTNruupNUptzjJuNJqQIjrwt5hZrjD2t6TOjugj9g2su2n
JixhJlVopQ6wZesq3nkJweIZpSjO5FBiiDx4SJMTqck/VCl138YUlYd4QpoK
rzHdpWQHFFdyC4ZCR8stjz/TgFEps6jYyA30GDqqnVVywxpWRe9dPkYMknWb
Pjfng9RLDUUKR/t4bHGtVLhJWAgVB57JZm33GK9QIy8Dh4zPpeQWwgpw0HkD
I10PcL5g30iHcL5IQkPTYBHP2VbBrrbEVPiKdfc84oseADj65WZTsQ5rs8VZ
+EnA9ezZZFilTD8tMdzsyjIYlDwrj35MQnvlCd+Lf8IoNTthdppu2H7Rl9P5
yUZgrxTzmbK9rKZVOMRzTho7Wj3x4rt6cPPS3OQNfLJsyJdWkyNqt3huxIzo
gzsozhwGs/aUxaPkC0YLd2l0jnjyFgrIlExBrx1AxvDNAwWNgx7ZABTFi3OM
Ei23RtwDZp/+gr1BR9XvFUt3mcO+ffATEtdhYY54bgWb0M4NMJ13GDj89nQk
BRJkpIRASILSn+JIRd7KoeeoM4UUc8t64RnccJluYjDiSXSvQTOy//U8uNh6
MoNZ7OvQqQXMEQA/i7niZaHkU0iL/HJyWEOtlczlL8pDy/IY7OV8tChwp5Zn
2Oth37Wn3V5SRjbq9A7vwi5Fmb0wq9hPu8fJC4+6owi0c7YGM7NctSLx8NBs
MYB4I0nCLMqg6ADKJ51Hlmc+ZVghhFLBHf7KrU/zj1gPm2VpsyQnDxbAMmy0
Ax4Y5QdsS5jbZ+Hr/uokGl584sym5WB38jD3Z/p9FFhzKyq2LspBtkvwkU3f
sh0NX8EpwnDfNu1DnTZ4J/TJXbtqB7ES1OEXW3u0CurjH0F60HH4TGEjMfRn
ccXpZNkBwnPuHj0lsyjypYYeye0erF2YM41eIqz4PVsRtVUyP5ZtfYzkafZ6
gYKNHnjJ4s8PA2iW0q7tMzL8ODoP/7Pr+ZZrjHr01dEyILoo2Ah63YJ8qFMe
n2fFPu8koYJB9Td5DYtNdM8OCA8YSL9UGKcfH3NbEcINNHBTyk0IEK05juoC
mSQpQi22OnjcAIp8H0dD56RhdOTkfi9hv44IY1wUZDguJTU+ijP26FfyNh1P
dNmXppAnXp4kaBEWobnaGGkW02Pk/VQUxAejj453OAVjMRdUN0d/BbXwuN2V
O6vwhhXJxY6UEFlNA2OWSjjLFnBPPFtJ08BfgmUiQpbj0n1CeMCQoowzw0aG
oJGlfqgYixH2z2QoB37XGqH3VNX4NHJMqD3pjSGth3YGZTRPR8y8kPuPayi5
dbXb83MXLGccLVlbPLmka2pnKuR3SLSeE+Yf7thfisKV/kRRIJ21pm4TXAPL
fGVGrRo/ammd1ablexHRChYavXQZYUg1IRF4jHEk+OJd5/KdkEIwNNB5sJyW
gVQ4cA33os4MxhHiiC5yluKMtjvNAPQM6gEC7lEqi0bFibqu5VA4R7vyrZG0
Lt4P8sdQWcqAQGOZgoUT9tBInJOPTH3G86CJr1wiSUxhXkb6YXC7mGNu6CL3
HLQlaUagCTQfD+h97pJpPZZfdIyyjNF0w6NUUPlvN4FTrWTVKnZuRhgbri+L
JJpJ50lFDX5a0Gecj5p4LZSnwcfPIDNHthSnpcR4u0o3u5tF8WIMU3mBT18U
X3/9gnSmmhzX/AnO1/FBEdgOwpMQd/Mn/PZGE6jl5o8nTokzikrsBpQ/B7wM
9CAUgQf0QMLRK1LXtXJdK8kYLlh5Oo6UgC/g/ae04T0DM0exTmgJ8BotzJii
c9YeUYf/Sb39Hfrue74p1ZBYEFOqi1wHXFN6CsGHsrthaAMC1xjyajy6NQY5
WYnBYf/TPaHhWhig5GEklsc2mpQPPFeE1bp00N8N2/3Fk4SSvybLHM0IeN5T
Qq3pL98Y4h5UV8AlWUYezN57Ads0GG9Ai5WcJAPz2LTzJDlHKxZjiE7ZnKcL
gFIYHo1riYqCcTsEHsYTSoK+DgCd2/hO0We4NDfF9+Qv+V/hOKR6i9Nqt0PC
WBVsGFxVRpDs255Dgaihw6KQUQhb/tc//8XzmSHYIHpsZA9JEJBfTKYwBXQ4
m97Bs/i04XbvKSwsORgJ97BXakvDMBTCo8Gp27F7wjArta4tLF23u17MHPy1
wDmyRApbkWZ7g7To0AgTYeOR6l1qQYUc9wpB9+gOJYoJDkJZQX4qo8rsS2uc
EMbnJP6WGSZ9Vbe9oWPIIqfZDuIogGuw5CllUD5aiK6k5GZ6B+vV84axqbmg
84ohr2PbIzjozNnTN2BNnxdjwYdXZA+3m+CxcMZ40+CcPZRnsmvIGF0pcoA0
Rbu9zQ4H7Wn4BeFo4gc8OAS3pnnbq/6Ss2dRxXZFNhROD/3LxOIfrs+eMnS8
/qQYRSajxIoKWdWggYIQKLlNZV+hTiEYKT3WReyKE3AEGmKJ4ifWFumBjIkW
8WCzOVcUlWQjEPpV0aPk3zUYJaPHLRljs6RXY1IN9FG3o3jB5p6uF+MVJeZO
h/MWtpneg+vm5gt+kjGlCDs7SMK+HVhl12e6EBXbxPg2kuayIXaVF1Eo9HRM
ywExt/lS8GGDKwvXFm6sqfgYlkUIsAVvddX41N2t16T4d/WZYH2GZkmzgn8x
kfN9AAL7NYun60qliGyuiDKMpx3Hs9HpXo8PbXFlKzH7RXnH9aIwezSuQKaG
+PXsM0d0XK6cyoeSwqK+BghZ7AV/UnYdGlibiE1UBDCMMWoBPDMB0Zmf3axY
pYqLOVdREpGVoawETkYju5Tj+MCGdtM3RwVKmsevphcfxOivIx8FnrBpj2L0
rVNNRwpWbtViwCLsOQniTG2reU0wGAxedlx1gQZEz2OHyZJWaqLhnI1xZD2j
/mHDeBFcAbVRF1j6g3FdNV7BJh2WVg4Uq/dUHN1TyAZtJXXXMmhd8DzI2sZ1
ygxC3tsDLBibzKDK01rMI0Whga8Obi1LEns9fZkNRfz0qQmZJAJ1k3Nn5T2k
0AOkkEUXoQPTu2PdyqXIXcRJyna+5Ea9QzqODOzjPCrC/zMbnOZQssAPtiMC
6ZYNu/x2d0ZQITL1otlf/A7OwVt0nvCPTzCHwwlFP/BTr0CQezNXVfxXxZXM
fo/jHIS+Je8T3pUeWKXCWVzSp5Pu6cMepA0ClFg54t8EpSBIdX0241vsnafe
DgijDEp0mHkRCzHORKF2x5PkpMIviiv8xK5uVxhphN/D4bmmy8h1HqmhPGPF
cOOAo1poIoxt2IKqXvF/ykPk5BCmNQhKMa8URe52UtBMMsqL6yoHq9ecC1kT
leCUU0UKYmLBjjKJcJLAoAowGkKtPjGr49fkwhVPRSuKP3DXx9XA4MUJHOXN
onifIXvBIIqVJj4FRXrrzewUD4yuXKjgmjPQe9FZo5Uj26XH+Jod/B6+jrsg
vlYWT7x4FQTMYRdidlJs9odTKiWMfi3IKUvuanB+5AQ2Da8AnB8+PqixTipl
zTryKit7paJh4Ubudl3alfNGEj2d1jKePN4ttNLTJi4wviubImFLOwM16oRc
gfaoasmy3zUEzUBQpSlSqpX91SjI+S18Yw+exy0derha8AIesQ2YTQI6DoLV
wGt+rMs1p9MQ9Y6xml7kRjOODpnlGAxkhttJUFP0JB6kUlLbaBzD/3moNpYs
OFpM91cgXDcop2F+/8XSGG/kwN6CS90fQTOYo2aVXu5XmVmkO84JlTVYPmdK
g9ag0mlvO8m0rbOcCQE90IR4o+tCghkj852NTT6EcRc2raLm25wsiAmqTcqH
3EzRoCuH5akwQTLoZbgY92V9SrImbyqqBHiaavQtz7d87TvOkQac/kKQ+Knf
NxIEMBtWwuI3+BR0wapGN56yo7TzGwxqCtab3q+edpbWY2xanxddkUB290AO
+d2OHvE85LCu7trnWI9LVjrMq4DhHOl+VFSiguOFNaOyK0nibgpB02yokKMM
p+UFYfRfYEKHJe1XUs5xyz9yLMKdWdUPPB50PGJhG5hTB7Age8YWtIsC0yYt
oVfRRQlwM4GOZisY7RIM8vcMwuRyH7HeLdy7kb300guNG/DNpLovPlLk1YTI
AUOHvjuVWgiuccmr79rXGAq8RzcADkfddoZdnQCoyZeQ/ZLn0ASz5Niviq+y
4PgdK8YXaVOVssIMh+4pv6QhDp7IY5kzqmSLsOueIJn6wJWB2Cm5gWHi+6os
fvPs5UsY4AojTy5wOHMvwZfwBAKDSXXFQ4Ur74UcpLI0cqyGTogNeGgYJpIn
CCwbNMXj9edmve/axuKYUr0psJ6FwtYYWOmSPFq3JtU5x6XFR+jmHjCiZ+cl
+FFS2AQGRCmqL/dhTKhYaL5fhDgn7B9nl1DZsxthBgEnf7loOdoNjCfZ24Ki
KULZxsz9t5hgBrZqRcDwVaQRBBMt9yvLmoLTDNfg8EdQizhOjIWgXoeFmRAX
3EyNK5gIGlY4wK6JbjMsBlb5suJS7w6+wLoeE3SwlmRxzS+u+mIscjJEXZdY
UCYFLIsRhJaIIoYykgZVyXxz26Mk0c2OumhksM6rBs6dU3oatbEE8mJx91DA
izmoE06iLGTd7khMiPfFkTL8aX/qNvV7DQ6ODbMgkzNDTst9i0CrhcWE3JTi
BdlWVDLjaRnNPsW5ByXAkXhOhmKZXp+bLXVbEjfGY4JoEWxExM41bpwsLW6a
NlQrBjf7YCD591oqFt+iTR3C4rtNTbi4PR69aJjM2CKvE1e1y2GN60G7zXrJ
7ZPJI9xSgemY4Yu1O7DduswBeAHHnESy1CJixJvrJadPxjOPEpoWUgqSZByG
4V8lDmXaq0H+r8VnmNQkUcADTQTK/qUdJkM6NPeCebQCabzl7FWxhYPVMXwU
xQILJzyJ0XyqjEaFduZLNZLeY0xdsmBMZrph9RLkGf7wJ9soO+hGnBJtQPih
bYj44d0RJUDLAL2lW1ER+1PF5CnhUd9xeo5M8sFYYXI5EnCrpyO+i9AjQv0R
0ma5v4pS4E4LHyiaROumOfUIm8m9cK6KUbtzwWF98zvLrEguJj8DpomutFdd
ZCDgFZrN7AJ6NKC97FWaDBS/nrNsvFS4I0nhoPMFZhrjCFYinrjJDVgU/8LG
2IJ0hJGNXTJ9SRY/UFiqpFpPzL87VUuGFGXWAbDFCWwgUWoLvlCRXG8w8qqb
8Vz7G89g+y81oEOZmY4vJ+ZfBhTHQ5GFxWCa95hSQPscT2hgtMCbykX2FP7i
wmoKycsQ8SgZGMNCwygStJD+Ek5EhZJCX7gW8jLEb2RgjqgyOL0+jvg4AIcX
2+1EDd8iWojlVx5b2GTiHQ/nnHWqB7Cseq4F9WJygk5VxLXVPrD3FS6BRXvr
c7gMo6OfnyO2sqykaWyNyzJ+jgk7rRqjU00fW/KfeQwYPNWCEoyHO5aG46h4
YElP5SUlgSJGRXvaiKWgUuWpVMS0XX8r4D7Uflo9Iwe0Z/aR1dmOkCwL2VL3
CUsQTRZ+oIQwkfCFBdLtfPqgTFah4bqUmiY0WSRDsXSAsEKBnbckNSTSQiwa
LEUKRC8PJSEbV+dZGZWsZJCEwcysJSB/L9WE6xptnU1CJ6+fzDp6teBHLjSI
j3dZJFXBxZKg5CxaYDijPfjjdLHJpGbNAo9kd2eNRBJlzbFq1VNw0bC4v0P6
h+6c7cjVXDnB9Q0r4bjwBPYJ5BpofxMRxYYxRsIXgcsiomtzbspDtXagnSFm
7ZMx8xWvVJe2ktGz4qTxYZB4sWVnptCjueODQ5NIMd8TqgRQTidm1IGVoWg9
ZqSwlrxO78jNRZucqDOUYGnE6uX4LvEoBWgKKzuU0WsNsTDBEUZ8wehCmmx+
YUmiWwvm59JbKGpkwYxgDk81q/GphO2l4J3vCuPaz341ZcAxxXyPcZ+prM/s
b8yOeD7MBxjL9jLEIOchlkQuAOcW80tlkHCZCMwZVVTWBd0Mm7PxvJjNBYPD
5ByR7WAZXouOwJbzKj2KKKZvlF0W/xBb6SZslhN3iXbKDpkAaAynpipUYGWT
CMrz9o3jva8DloYuo6Fjc1jzWF9bXMRS+bDQh1oywQSX1JiIKxgiMzDZ19NF
NXooQjyVw2Juhw0KG7QNvPYczdxIHkBxEqZrctCFltmGiDYHguTzImHwXI6u
zatQXFO8AIF5G4wsCgCY0amnHv9Eyax+ms3SyBMJp0OCF9jejaKKbApSZrIU
wxnHg6H2vhVgqxksnuII4ZlohVcZZ6EVYeSlQ1wuogMO+zt33RESAkLhsBJw
GKXPQQfgfkTAj+VMOPVSCc+YIv7zEYjAD3wt8Jq6bd8yydkBFRY+pz22GLmw
4IcAwpntB6Z/hLGBS7oVa5CdBjEFMeaMQa5SED10ywkyQ8gwXWhmwoGx9dtK
AlB5uQFXc9IjGVO23reorCmARjVoGteRmEfIp3OcGzGjGBhnacHxVapzUfec
cgqIATHoIJjsy12bZwsQp0WQntsiOGt8okeOmta2K6Y2d4WM+5DkkdYTDhOP
dF8iLWHmNmY1cpJFhvNwOA6uhAmnKDcOi+JBTtP0qSak7Jad/YazAb1qZCuy
jedQPQx1z8lDipW2BzRs4O6TfpSwb8Ppl14c1tVZvWuy2RAjMiVjVRavm8l6
I2PLsbcyWvfI4NSfBk5oYO6py3SIFN+RvuoZDks1OrkBaRSDYhAq5ZNNvzqa
/7KtdqfOkyV3lmSqzxHLPiNx8TUZykQFcOWCAFYWfUcJDbPpyz4Xlp2wsRnp
hjz/RXd2DtJELCB6okL9hNUakU/e9gH85LEFubZjZymWChZ99AToMAbrhCxd
jqccorF4U/xnfAeN2gqwFn4Zo30kIj9YhEojJcbF6BbRFPn46OGOFHWqPzOS
JYWJaT2OU09NMPrPH91kO3UzU8nuNgZF3w+UCqCgXj4vdjrFGBo+DQWfhply
AsZEE8FdgcKjIiLJQuwozQCS2zCJJ5CgFwBx/mc4FgfyHcSmRp9ylr19Uz40
S/nCEhkXSIIt//EfNQNSCl5bGdo05BqJ1vw96ITQe/Da8sPjR5f/+E8MKMKQ
m+YsixdtMEo1q/fECjgERsWpsvkETx7AzLMb0aaDL3BYgDBDOmgsXonRHbQl
zLzLog49ln7x6chAmgJ/EfXriRoO5mslfWNspWyLkiGYkV+PLbIaVgSs92MK
AHNWZeCZIvzJsZqBxVDAzei/UwrY0d5sIcEAEv21DeECq/0Xqi4Ox6lH1bRI
xoyfD+5URjNk5pHlevOZUiqJ3HNw/U5VllpwESKYZftS4ElwpTqWt8IZ4eDb
BKcO95SRuu4PxJTbCNFnogFlvIXukToQzlEit4azAxQ74uyAWzMkjsUBpkXk
91vSD7fKXtxiqT3TFgjTdmP5vhifGdlcPWhmRPTMhTqnio5hBSPAUkHp+kM7
uCKa47Zj+UVBeYIGMS/gqUnvjjxmk5z9HjR7+9DQ0diCMUXU1Lh85DR1p+MQ
neassBX/aMksTNMEegIWZioAFKhrSo+ZyUmgczg9+kcSAlrv0/otQZ5pf7ew
LKty/TaMXU0JvgdiIXZJStmMfVZvSZZViIbXrlPjSMqYIhfpdGdG14IXi8pz
Mgn0Qbi/HxUYfo0CdPTbha0Xwefx3b99FlzkloqOkRN7WJIRJqMWXT2GM1Cq
LELcSZay303ITbznXKBw9fsvrhfkSNjF0IMH9uXbtBEsplwK4z2XBIVFmk1+
0kRsgMZvmllCFIBHuw3FIlVoCLrxwqIMmHsXhMIES4EHo047jPIZsZjYq2bC
5AjjwArNiBLKP2jeXgxdNvUmiVfWmS8VMf5GaBo0sfetMfbgr37P4Uv8lnxD
7GUqU6Xc1HBirlksYa5sgQyMH4LyjpQOv7CYN4ZkSCrcFM+QfScD+hqg1J3e
mKfzlJKxrrvhhmdod+IGDFbOQMZHXlFKxyUOWBdB2DXh3KHkrkhrC8aU5E4r
9e/gP8PFJzcOKUcVCViRijgFkcJO1/IJUwgV30vd/jOMbtAefSHQAk93VOK2
MdS4HEYL2YX1E0Af0Ze2FfOJS5ScBB8NkPybDaekonEokiIRgbkG0gh/VZ8t
4cfmK9XTVz34bUomBcuzzXZcK+dBaPaepaAU3sAmKk0Ts894fx7A74A/ZlPD
m3ZqcIMZTmJQylNjlHdIOVl2B6yy8DALagJarVZxeZuZxC1GqJ2SmcxuaUGA
lfpNe89SIyOnZNIhIiYae+lMBoURIXgyoj4koMLuD3UCgM9WmKKnKBpJRQvf
gmnKbNVcJpBVpMuVm3RR4FLf7akbmHqHo7M8RioJkDGPWaqkxgu2CKt50e53
cNONSok3ybnbX4Yy1hdmTZFaalj6GXUAE1Vn2fQq0Ejn5dianuXSJytiRLwp
5i0wNcMCvPQSfy6miDlmL+oiYGRyY7VHyoA3CuKmYB/73JQjQ+jp0ajSvCb2
IRFhEC2uxOR6DlfFofMxRDnXk+LqHy2xi1W+PYOC4R3wsAc6e8IFSww/UfNl
NYQhrO0FYxrcGDDrczZiG60MDFFOrWEwgIMslFSnBC1Y5YVUXtAR9EkGgqvG
pZrm+jhehyLjymWjFcQ4UxEhfNmYdWFVEe55tJWG/i0DnNyLKXjl67MVzGsr
DIcFFk/55NNkNEwxl6A0ElpSNzIMdT5gKPhzAEcEaWHX2O7RuAT+u/Z18QxE
DrLMJWYCPrfKZSbIcDMYLkJVt2ZRBhztQgpaNmB2N73EivF9gahekssCI8Y/
ZneVYId0lGNo0LgOZpod7HQurE0wsx3viXMzFlf/5eXz/jp2RPIrovpRY4MR
ORHcasZQZJEXDj4pwJoz4iHRh1GoiCTQcoApnACheb8ySAEjRe3FOHRcQZst
xynE4h9VssSL8jlHlr8nxe3aQuxUFUTSNEAjkmCMuHeQLSY17qBlkAe/ZmMn
7lYgImXdk7gIjkq8uhITG3p9m7B/rMIOiGKPC2wYWfHpcZOUzzUfg+8SEShg
7S31JFFwl5b8ZGBs9dYYe71wF7WVKjP+9llHLuh7uyFqSX8uXlIn/N28f7a3
k5C7RMIxhFOtB2sOZZEHS3KEimjkusVAAIkZNu39fiBkkrvdFKTkqTmO9V0h
bnbM2HFSgoGrGuUhHwXuLd5GV8WOQ9foA73XwgGYn+caOa+Bk64JgVJtX7JL
hqP4Hfd4UCKaueC/sTyUBn/JWiFdYtTPqLWiFTe6rRaRlgIkw2SL6YWGM1a5
B2wVV2gS2yyurOdIvcB2KqhuJuzkE7onY+GSjoXJiVVW57yy2wL36sbMQm2N
eIWMaAyGgzC9z2gUtf/UJusw4Dm3DFGb7ikBqjWD5l1794jMz5ZKZOPXIZDJ
2HsVnrHLTv6I+RvewEivVnwNY+YmKgNLKYqT33JhkpfvqrdqlgrlCLTw/cj8
QDrDSS6GdlELlwSlY/FEOhGEM7DDE/ZS2+OMs61MzcxCZyGdK6iLVTilHrLK
+aKxyltpR7ZdyquRZ9J6nhYuNSq5jvdReq5ktD+EVukpTEFp6kCjye+vOuKY
OhzhYyRBV2Qssb1Xdb4Gmr16LRqvRUYJF4hlxyBRtk6Fo/d2XqA7I7cic0Bu
Hstqo4Q0AZRzLM9YGrxEg9ZO1UhRkO6h+CDxMKkOoEOEppXbAzsQNagUai5I
R/8B09vN6SB1j14KFlDiXMeDDAxBeQZFR6gr0StsVssAUfv3NsieR9jH1m3B
A+Rq3oS8jygZZRIGaDWZJ7qzUppYJX1zNR7sUtzg2EBvzMKkZybYEqFGLeNz
YJSpkUI6yikKcUme9lqIMzAVZtOiXXMfm2ZGCyfeKCddM4Jkr69BnUXG6sRN
JY8EI8wNG84hdKZmFezR2qH7shCHsuMiQvRel3z4g0ue6Wg02aWmVlUbLQqB
lkbML+gp0O9VnK9bKy6yrirPvRkwJpS0gU7cAXXmqsGdGyGd3lPua8R8sUq7
SjhuubAt+OVcyVj8viE7zq1lI50wpxlFWYusf5L/h5FX7QYNvctCLbRFstIb
Af0psjjLCdHRV1gMJQiwcUR1SL0apvR60CA1Omq7fXg+ZQFrhhc8Mp6Batzx
nK1zMkqWjMxhzDVWrP2VTclYHWPHZoGRYO8Y/B/s2HNcr+SmquLtIY5CWPU5
Ooi/8Mc4ufAG+fOwOUko/AGJvqHGFsFibjZYUkMt2bg1m5PSZazbc82eAp1D
n2YWgcOm2r0imkHZ3HM+tpx7Iade+N7xQcKgOX4E01qRmkFiAS48cpqGyJ3g
idOrVltjXM/lHcp5pgHr0uLZ18DUKPKcYwfEf02cLGRGcrsUKzfActRRm7+F
09EOsfcisrUcU6IyZercxwOhZIeMg8qNWK4g0kE5tkBv04vJ9rtnBG9eAeAt
66SMe8HJO8UN8IHSMJWY3Me2rb11IV0e2mTOwsPCJ+rO5jul0fIrZufFlQ/p
7GvishD8FJHKUbG51CNSjAsNgy405WDXDafmjRra04BfVSJLdq+oDzR6szT7
HfqdLHjdA1oUoWZMTGjRkVLMc9BzrhooyF5aB40neREXnPctFq/ZxZdLykQV
BOXvuExu15a1xguzJmZoobk4Q9o4UkswcPyazi2HP1ngWXVSGhcrLwSWkjs+
y1JFBKl+BOLC6BLTZWeVNpF9BpQarPOgLGNUlbcCSdKsDUUfAcCqN8c4J7MQ
tTWjXpL8pOZItlZzQ3iklhdAlXPyS/IjWaWq9i+kbciayWmZhH4j2PczZDVo
SUTp746ukINQm0bLkykqRD1e7lVGcMah5dCrJo0+QJAqJaUjzOxlGkObCECj
tr72VoZiay/0QUzFpABx8qHYXBC2aad4DgCAyOOncC+GkODlEwNBWuVx68qb
MDS6rEGJia1ErPnRXY978Nc//wWFLTss4AFFnar3UvPM5EWXxTY9FG+rul2d
BZ9FmkPA5PIdul9k1VKYeBu4NEUqBjkPTmTq48Iryi83gUu7WwpBlKrN6A+I
OSNQ+Ll4IloyfSuU1lgezfcnFs71vxbmcaUrRTZwTasxHN0aX8XmrbpQ1DgI
mYFEEFs9pVoVkr5TQJjCbqiymYqx7wWF04O8HyYBeMVohCrxO+7K06svb5fT
UnZUVmGIGwk3VHziHFMqvXXkJM4Q34o46kfhxKqXIKPlM0bUo14mpSMz+h3K
rTIXKqfCRp1INWSXSQFN6GvpNaKKVdEwRFlcJgquId+mAoFurW6wGn5JFmw4
85R05pwGHhFmAGA6GnaIJzRZoKbb08bzFfjBMFYmSUtH7GYu6RsaxS97KY4c
N3kdc7UbuAZ111koa0bVfOA/gfXZmV2jFvIqiXYTKRuFWxPniuSrbc9dpsVm
tkALkikqW01Qdc8bcdhQpZAI6KJdpeHQETeUm1sTDn2SGbi6GRj2kA5tdyZz
ce00HF9nSvO2+NbbOAu2Tzv7usjXWLP7ZlsmW7NGcGMTC6Ntm2g2SJg2gvil
TxxmyoWQntobH6kwhCs/q97eHSFNsHo7jtuE5BmItgH8iIQd5hl6Jhw0Po94
9hTnNOUVwt2GMRyTdWPj8i0iHZCcl3rRC106B1noAlJac4AVszi1W/20Fmi+
BIOpP7QonN3JxlyZxfVzWycyjmpJw0FeS5WNjIWzxSuxtpPbx8F1XJ26TULd
zBBELu9G9d+z2BL8Mt66FBOhFH7uW/BQeuk2anM0iBqjrpSmwqMVGqtaezJC
PhQvEPYosQ6DdKD56rJbEED1ThQY2aCER9G4HmT2BARKQg+QM3r8ymwisUj9
nt9eKHOYXouMXOdw6XljiKUmH82BDHKwkP4u3Ew4yAF/+f60WpBwphmrCxQW
25RAzhDjxF1yNW+IbkPLsfHKwaOlcwiGTbKpErkEHMUJcBMEp9Q/q8Z23yQi
MT0DTry+qT4634wRYGZnjtBelRZ9CWJBc99EHW5bCiZZzsGxjRDWUsuiNM6h
sklAPxwr2vrLfJKwuSeFQk/xcOMmEFQxJZCnQMqpADaNjDoac6YV61q6A/B9
QsgFzh5fiVuASNmEGVuyWwddO62K5TZqNO97JnYeEwpxniDiuJ3doTEsaYhF
b07mjPpmKAaV3Q5arMOhpK5o4hNMDBDz66TlvZfcmXPL3hppeLNCKRK3UGkw
6TTmVli5KY+Dda0fOZWUSxvmGDwVtYf4B2dPFUOt4iKsR8JJ7HPBB9jtuoL/
dcd0+l5oSbhCLwnGL1E5iKa/shLmWCw/RlV5TiVsiueKMKrqgP2ZoAil3ry9
mHJaz9S+huf7Y6hgjK5G1o/72B4FWYidAkSk8TzmgjgZC5qyLcXMHAW31eTh
u4dMLzqHLnZu5LBJEHLRZCf89b6tGC/gfLSXYhihkNAkpFcUtORzSWF62rjD
tbD1dWihgzVjIXtWgk35P7ENKmnsm9fleLgCXToYUDkfr8gXaxy/uOjG3oT0
t1HNjIpirOfGPY1KfDdME5K7UYXy+HEDnJ+XhMO1Z6mlkZJeTRvidSsnmGIx
6bVkJG9RVWYCile5VHQ1sXAYKPp1ZvcxfCY2CZqycOS41FE1mYYyM+INzXQ6
+4Yf4UsdWqesHCPcRxU0ZE5gMyK3zwq/JZIyoe7ZzBSOW0GmpklU1dU2ypHD
4v2OVPWFMnQSimtqMEVwRA2BcYcxt+4UVOqx5Lw/AfMSrk8dB5RC9Ew79Gmq
NPAgZKAXv78mS/UWB1rGnPZktHU5OOzUIO5VpQ+1inHIK5MUE79ATs57Ue08
xr9i1/knc7BMUXISMTA992M5WLIbMcfAogGOR9hXJvpszMPy9yVhmRnOlIVl
PO8P5mARFK6RsPRvqxqj7NGRRULC9GDNnynZ/z+TqOVxqhOVt/8fJDrxY3+J
6YSc2i27speJTaYW2RyzSR5O/TdAb0IT0vn8L4qTn0JxMjqHkdqCqCtuMXCs
Xq2HEj2LGMEvZmXmyUTZocWIOCO3SXUbpXCZixyy6lbKArPpJXY6agECJ7bU
K5xOCPn/VkXP9BbjMKFXOOTvDhQs5ImnkdGeUVi8cgqL146ohZX0KhlKYFsM
P+P1gB+EEcqaH4/L9KSrEizWliubFQObpbejTU7wV4Q5GvKa4QykyYU2hnOk
+HcOjDLdhZNGDJxYnbl7F7yUDB0XijyUxoJiCY6/dM3MzfPE4vEE0hyFROQx
ppB/gICJYRWANqeBfItQuB8WpMWwJji0zKZQDnNOCp+5rOFMfQ6VX/bJm8gm
Sw4kgiFQl4mZkrHoWg9cijngXLjfSUVuLnlYeVF8SFQvYgPSpD06A25D7If5
bYEzjAU8eWtqPpnaxYUSpgaSYApxCp8gCrI4HQk+zbmezmOYx7YPdYm4o3s1
Tnoth6WlMlzfjiLy1SUcjPZtjwD0fM3dKxaY21ZQ/R0aI4lCxWP6K1sLtyO7
yC2gbnBeokMCrmQW2tERWURlrPHyA9e2qpHCwUEGQ4l1gY20kjfFNCh44OUp
oxWntCbqtp/96OCKYEAe8zGttoXvzlLONbGJborfhfNAtmGgAQnv54uX48M9
7ipiMZTrh9g1yOewgDvh9Sktd72mBm1lL5caXWZQ55SNE7jb5CK2iYOzXPfZ
MtDsUElxCw9ZW/qayCAbJUaIfcHNR+hELWDWiUPw1nrT6KioIZSdtUT3Cw1p
BaYMmjaWlsFF4IoSUgOkT96w5sJgVVZiRmHs2ks95LnqvvJjGP+fbZemoqzZ
JPIoks0jbNRCa7LwbKXgiviui+xcttulhmJzyMkL364BgzEueTDSzb9GJD1N
aI7/lS0osqJDvtOOjODIJIpGCrrFsXZqqyCTKRUivANfWyNrBxFO5He5etA7
KEy1flQR+U6kDYI+aGsGlZEVIQTf2FOBfHar7MhwhBGOuT7fOMaFen31aKJp
pKPkB4sk4GvCBUxmv2BzwZviiYb0yJai8Dg57WupV6abgeeVPaIuoSKxVUlW
HTES70fyEjii4dhrbhXsN0HJb2WuOAYNsFDYCEdyzlpdw8u0kokPJNnGvVin
+El00bK6GYYAtR1XRelSK4592tXZG8pgZp8Kl7FLJsmFimkWXD79koy6Huu/
27qyTcFczC7IAuPjFx4vFrFkSikuifVYGvMGRXBY7Llr9ZhBuVGOOVFLSlX8
AUAAym4j3fyoboXPYag26O08NmnXYrWW2jszwHbtv657Z5U0qdmXtO3jkxsj
1lY4I9dkq6I90K07KkcRJyPRpDc+Q3/nriK5uGIbTMgR/v9LovTdSdM1jmh/
RfU/xSsmMuGe1bMdlITwzBWtUVvUlRRsBYylEqPIrXoELBFA3J7iDTwIeJg7
zmLRPdRuDaV3Bj8rop6KZNZvrZ9LzbyOdDE11J/lUTXaf6F+4BDJpjKSsrkS
aQP0qQ2laD3O9JbSMXYameFONqHJwCRFgPIew5ixoUmzyVOMpqUlmMm7LRtb
3PFL814138gWYjuK1HjqQUDI/ShB5jkxxZiUoyiCJXAoxfDbZ85QJ6TplrOO
3Y4ts4GlGYickmqPcRsKPVT4PE6YZj1orOWQBjoopd4swR6/r6it03y8PTaK
XnhMRePsbJuEoiI6NN6rBOQrdSggWdc+jNAXgUeheMpn2i/fF2xZYG6FPbhR
TIbLen/7bMJNJgYw0eToIV2MjGBW7oikYwAAGiMsxsZoAsp1scUeWo30Ul6i
OHy9K0G3wMl8qvdUKDlzds9SaSsd5OolUj0INuRnVbYAdMRRa+6oypDKAj0X
IZyLXqnBUsCbm3SUdQ0vJ8SPHg1RBs5jWXDdRjkv7LSzkwIh2FwqXhMg4QnH
MG4R32VXRsVLtWGyEialakjpo5Ih9SxpZ5tGbgZRshp02iYx1hrDTm1g6KX1
F0otmPyNVtlVlAiAC3HCBMDqRIcc/pewTihvGBN4wHTeUtyE4YxoEGNDVHb0
6NOfF3dGrTbPgsuF6+qpDYSFN5SFgZ7x9jhnAI859PTqEJ0Cig5es4PnUQSN
gW2xedlrCd+ipfC1+rGvser9kF2lWy7iUJ0Z6kMCggIblnFBiGIVFgZEqaU2
TR6c9SsMh0o0i8OxSTigI1eqxySQFkP6a9EdbknETXr7BGFL7VKunLJWcWMa
NM88aeEed8SGm+MxTe/kZfdIATeO0ZmEtnwVNMEiYtbqny3JKfaKSAY072YI
YmgH4It1zPc5ckYDmk8oBP+U9euTcW3xaxm3i1OmYJf9UwdokpUPRc2e97CC
/NRnhbLSuXWc2tE1o3NvGTWFLGXUo1IpzOtCqXmqZAjW/CjSH2JnQZHoG53S
MNbgx83SUNIiv3ijIlUlzQkzoeADqIL/Fpkb9byoxeiq6smeX/jrJx5rh5OZ
1Uk+QhJUNu+ptbC1ksh0FtEPZUB61vCTZJBtQXYMEt2Ag/5NS+1iZor6xl4x
oXmkpohe+hkaBVQRKCLkg9IKLBqFdSmkbOuzN5Fm7Yg1VmzY5A++wFjhdUaj
XtdWIhygS1K3LVGT0QsoVoJ2gHKimKVEqR6xF6d7fWuVLOI1U1EMBXFwTzE2
p2Hyjqgk1GA8nkxSNnAkizEfqpX9MzdERtnVx7Aec3iLH4gL6AYlhz9YraBB
w/eZtpUwyGVv1qZ47jgr5CVDyQx3T2lex45CxYAdynTJGeNhsFGV1x2KRUW1
MpTj5RFw4B0z35GBGs2svFBRRN/LEhmNYKhBn32Vuoy3gDx0qvKxGk2qj/Am
Z6eVhLSSEG5Y9bBFSpzjVGIN0iM4K8UMNcGSPdEiN4nhcxiPYzBC0SQ5Qtg/
LqKhfiRjc00WOMZBM4UneTQmhpTEPJk/AUZMgXRKopLDQqvSH0GlBzGmUh5l
KiJ9EeejFl0l9YLrMsiIWzB5HE1rYaVeX8BiuFMP3ZgQ6yQlFLXRVGnUVQUw
r6n2ApwVnfGahaRGOaGPxuVd2FWYbqz1AR9XaQUlE+P5eIHFLlwI/Zm4V1Wn
yU+KJmUJG3Y8Q2NQbn4wOA0KU88xjExizwKnP9N6iSVrCE+K2L8xOeNYrhlc
fpY2X8SQFY8sq6SMPMC4Ba+y5Pw0c8yADLHKuUqQ8GOUUSFJoBLFwvS9FPL0
GiCMmVAOyaox7Y7gmIlFqKAzw4BpggUE9UEs59aAUAT6m30WJhSPe3UON8WY
aRb2lXNWLPMh5LGGIAjPQEMcaw+Fckg96rYLn6FsCo+lGvFfjdorML0JBx7D
MkbHqVz1RBTBFSGjQjUrUVb8rVbAZXgJ2lU0J/r5dioz1EIxUpDtTWR3sg52
IS1gaM2MRf97iUqjMMjMj/uo7B+0IGcFdtaAOfrQr5JcOmr0aekd5aRq2MxQ
tzN0+fNLk2btrxEiMwa/1APoEAI2ULpuG4BMi1H2GKPKfB3eCzp/Eiq+Xmkk
OmelfC72WqhaUk95hq9aOdi48A9HcorJXzfntGCLTAlvz+CXhTuVmB+o+DeT
gMaPIgNxEqpQG27qA5vZqCx8X8vqEtQt0h3DA2LBuo8iYOqMhwC9Ro7MUNQO
hqSw6Zx7+lcx3MKubORsY1USCPeIgU/o9iQm12vgPmcwpVq4eMGlhK1XeImu
lKUcqr4/pTgbLdBJGyfiCAXTcG1KLpRBCoQZJc1Fu1Wd2SWkrj83mRQacVIM
fqGEMyKFaTWWtBq8E3wFrl5/fddfZwlR4awacyYrLhFMV4MqYIiSQ4rWZvRX
AX9Hn4lLJxSC7IeeMJylHim2UtVFZHbK2C3OCMNCR7cQ0RVkDS93R1XYPLv7
ykjzCIhJmZPzlGca/uuFOuKzC0aZyEckwGn6iF0FFA4YekWvgLzuQft3M1ok
dZrijmnCKWIOP8qmN5oz1tZUuCBjjA9Bn527+8fAeT0/HGFdDkEpC8wowIEB
rsZtmQWUhWZWzBO8iprfijWGcuxGDM9gvrbkDFIOVqWvF/2tkDiZdoVNI9p+
lG+GEvNySoPa4q6obKKzA1ehOx8tq6sJb69Boi4A5YAEqSPVp8DDjHc4Nh+S
DUdRyy/GeLMlX+/Fyo+IW1lNi6NHomjS9NIDraSKOuvGYwREDTsnjeENCm50
ZHzY3Dpn3NM6K/L1FBA2DVfaKA/IoSsiN4DvgJCoe8QY3480j4GMh7wA/aDV
2OUdA4XGnNbez91CuIzgSjAzEjMTb8cwJ9RwC6OQUQ4WEhfJGiAICDvoXL4r
CQlRjuQGWKBgUgSiwQItBXlIY17YucoTOUGjKysVj6vEjtPhSAk8PBxOeyDB
M+cR79Lx1JH7G5jcMEyesmNnIMVYUqe8esZMNMK8LLQhFef+GuyVpnd1pTQX
FNau6GxQd/hxGUJLAWuwEtm0MtZCTLaHsuFIF+sbrYC7SY1lvEtGJ0ELx/zv
3L8FVrwabNkoMEoptqpGB7Xm5L8oIkGmW1cMuIjrismG6NLAI1y0yE26vOSq
XsR2mSRqkEOMJxGjB562NRhV7+zPCBgx9usIT0QXEt7cEn3/5zNssCbjI+vA
ptoxwSizfgc/4urpqxfMtGzARzCaeu7Ao8qW4b1TLh4nfJfgXt3udoKqJPVM
FYxs9etyGemf9Voo0cZwS+xb9iU5nLrXCnE2Lio1sS7ZigsrXuOUFFfpRYtn
gTeBvJtVh5eE3BcNEpJFTZG4cWkayrO1Oh8ZI7TTmRivM7u+wg2x0M7t6ndn
f7AgDRuaxq7/PbJV1ASmNGAGN2AQuuuYXpKQAaJbmWyuH06bs4pi2Oz6DIar
RbZZr/z+yEwKnFGqKepJkezn48tGmm/9GZsi5Fru2sCaBzY2G7SK5Yp0HWgR
LlEk0CnyVJlThlFxL/5N+zRxZk6SX0mVEce43bt3840046QpWlYpzUAzSyKU
mw39BmUzlwlx8F3acEqGa5ddSRCmlIKi2NGurQJjqIzq90du3ich3a7dVnWK
1eN4lkJnQ+HW3McaDuKKDfwRXt6L2LVxAapVuyrSO4LIRwhUSlnzQPFkx2T3
QneQQ7obbk4kw3Z4Ej58Bf/nodoMe6y76kloLjKRwKl04t9g2Lb1H3BrbFDa
DYLdIQbY6BTU8OlSy81qpmZi1BCCBdi3w7J/ACtG0pc6LcamYA1jrdtS9ZZy
JewMhjWYWF7QZ0hgUa41R4JnEv6ErpWcA0w/Zols7+ekdPU1BQyyMu6GeYQ/
nLQ68Jv8jazV0UNeTN1yDeDMEkf/rATWMYqjDNb/xsmFp1mqv5Vd+PuKma0e
5xYOsYPL5MJarf9vgl1YzEmus3R56gfDzvTnxfc6OnLfdNHQxu1SCPwg/mjX
tjsrmqPcS4CdcIheN2jzr8ByjGFr1gP/i+L456Q4/tKOBc7p6d3y+ZtvwBTW
jqd3jD2mOJunhvEv1ulE4IPXKrLlGVUf21BTFi1iJ7Xdq5e5j8AadA6tbbFY
AxH9Jj396BNKPCkE81gr4R0ZqmbPV1lYe7omB1MTH51ggHHpyw5MDWmvaejA
6pB1nB3T0ebsbERxaqlXilGehpqv/Kq9J55YOrdZ7+AACCPrhTDkQ7ssdwZJ
16IQXWvfWYrHiuygvLyTX3kHvZxMIiqtWOLEKZPy7ABEQhqDxU0qIim88f+C
fx9/9Onyx/z79OOPftAT0iAvbIHV3cXlfz/gF37MP/7CZFSf0h/nfk2P9yFx
CUtPT7GhfoO/v7q7/uE/wHf+t9Gvf3f9gzylKO6W3FwPH3RxIJd/jf9+t7Sk
hz1ldqKP/vrJEtwZ7hb9Nzzl6VJcpL9hPl8siy+s8Zc/aLSIX8yv7bOwtl8u
RRz9DYN5BhMiuSPxo5+2OD/tCz9mtHNL9KUuUfjt1ZNPn9IS/ehT/xPu1Y++
6iwh/o/b4h+21e6fwEQZ6vQfP/kS262n4p9up4LgExCWJHOW1I7+P37CfHGf
/J8ob74hiUb5LvkidlYXcAgxpaEk0wr1UghNJcM6tDspCWst7xuw948KYYQg
GbTKUBrEvrvQoShja4VPltzKFkw2l9sE2cMOZUZCTEEjkq5gJj7g9+sPkK2f
2pZc+DPvrIxLY0LxbIY/C6RZz+hLMQ1RAh04epl/rQh/efRYvG+UH3+Ef/vr
f/2/i/jfy//CJ/mrP4QxPXJqdZB/01fed/A/nX5l/qmT3/7MX/nrf/1/PvRL
79m1ua/w/9aTg5ct/8DFr4xH83MPzP9dOkh/1725tAuP/P7HCle7RbMDkS36
gnBDYA9Ph/kT35YJ9H8eCfR/NoFO1KaSIA9ZUA5U5NLvUYlPjSzCeZPqSvpm
IXRHvX6AsnskbRtmZjxR8l31QV7JgzUraSNoMuzcCm7dxx8d67JJvQUy+D0S
WcvQXgq6Qq3jWkTeydFefS1qjJlvvudbozFo/JmyTFxakhdvGwuDdajm5b0x
/dR24uBzc1OLwlC5syyaPQXdA64/xFFh8XQoNVR8lGKoOAfM5WnOtwUu/Ylj
lu9dInRX4jBp6D/SxRhfhZGUCkf/Q8we+Vh48adjufSpf+yH4n9fvnryBX2x
+OEVsx+wufsL/jv+o8/cFuY4LzTeRUVs4OUP6xt9ng+k+OHuhJDhQSKSo7/D
P6yHc5AR4wI+cPg8rm+f3uJ/X/Bo2NsMlKbFj3mY/hOEQ+p8hZ5+qzNy6qJf
yBf5y6+XX969gLE8teOi1ssWqY+1ucPMEmU8ftMlYlTrwQ/gj5nSm7tntD4W
8CjrHUb79ocCkZBUV/JjnvfyDy/oebHNcuw6rROk1ZAJSlSc1quYmeALmlUx
uzYMwcJvZ199+vTVGxwHuauz3P0/Zd8HzJa3u/OPvD+wxjbgV8mK1n8hA56b
F7WzL17sDsNNkX3sA1/6wWOD/bKXWqqQ3ntxbNmS/GuOzRCR8FKvNZ2MTVfp
7zo2PmLy0jfEiiXjeHzdPv/88586Nvv3afZXf8XoY/jDt3TJfnjsY5ef9oGa
KTObfjMym35zmyk/QqC916x41Gxyo0nl59Sv+/ijlwZH3cY+vW4ZaN5EGnoG
9XCpy6vS2wnnHAZQL49ATBuBAvCAb+zz/sw/tvIoDXYyEW4X1Sy56pmetGRh
y4WwmMQMy8zWSj95GSaa5GMUYs5ZbXVpFqNiCFTdsUUsMaTWtbPrvlF2HiGn
xqETISQCPwcT/WSNYYE8Z4r7Ajn6EIdZ3TOBKiZhjfFH9RHvj/La0VhkCggF
TdiUg0B2zSmF6EdnTNC+4x2yH3USHsZDqIkZyaarIuET6Rw0ZvRt+UhjOGPu
BLudN155TbsK8uXD7L5g802iHeOrPHP+Ltz2mX+T6Nqn+Q/2kR9Ynao449ip
ihf9iG3AD/kPP+ZFHzQWjpbKWIKq+Ncbi/1ltHSqAILcnX7EDyaogLmPvO9F
H3BeMkn870aS+N/dvl9uPip43xA0qVXvM8pBO+gs9QyppFnEqrnHbPouyDMF
K/E+UvZQ8bYqmvmAPeGEO46W/n714snLa4tyOkgGfi30lFbQPsktFcQShDcV
47HIKchhUw3VKhG6JcI2ljfKO0/fdDeDIoe4Uo/nOUyfYYWoPDj+qrdwbiJ6
AAtNmhx/IGIUB5tBAkLptdYi0gt+Oeb23laEK6qFGrgSGorSPoepf+UN/twB
v6MgatCtIUwx6gqrfAnEArs1tffG/iRgM9F1GUNToDudjVdfTBj2WrjZUkfa
lhfGD4Z36zJ4OOeODCUjeiUo0xntD0dd5kP5f14c4yh28k4FJQRycH8Iltsw
SI5i6K7JlOyQ7kbm8PFtC0Shg3QdYGY2WndqeMU9H0cgl5xvbkRQtg4MaVLg
Kd18lUrMWobrWBdaa45/dVvCmI8NqmaNGVw2jOA3xgSZ9fQytpGT9TVjZNO4
gbcUSwxSOU9YHyZ2G7KWGspaRwwQb5P0TlHuzowWcZbBTVsVlE2+RvaGcQGM
IvYyg+x5TnKp5bQE/BX7QU8nhVIW5F8s0D9bsLPMZwN9SQKlc2I8VM8JLRyO
KlZMRUo3YotdpX1Zb6X0lps0W6051ggurPsjPyGwTR04N/9T7RdPx8wZJxPT
ZfqR+X8/r/2id4I+En74iTbD32S//Mxjsb9MjJMPsF+UTbH4e9kv/35kv/z7
20sa6VGr5flgiFJwRrSWhjEMv6aQ8NlFO3sx64F5zsrehRdcl1+T1lQehZHQ
ImxFQNgRxCgLRi9iMxPXTuVU1+g3bopvO6s1sT8a5HLcGpybyOU+j3L85G7R
K+bvLevl3WmHqgFG9IyjfWhxXL26e3bNVz3wbWd23nrk59oKSu0+nafMTtxl
wtS+PjEVCdJ+MzZCNNkjxgchoaLxYUF+FGuhKtoTAbjrCt4akNlU1os3Yc3V
if1+FJ0QxpU/CAJLDBdmETszqlZ2UpFeZF/lKEaqqUL/0VIC1AmKD49X33vy
5qfij95zvR4XpO9Jq83c+J/p3wV0Ry5LPn30x/HDflDhIJGHX3/jG0d//lp7
D/OPXO6IQeSZUNljMx39efrj9GGfXpjHp0X257kfpw/7gUn4kryaknFwSDnc
iP/n2cvXReE/Yv+DSaDUH6aG+Vi1/ISR/czT/AaOpq3w99WXlfzEf37NpVI+
zdd3T+Sv31KtLIZd/1VGdunY/sRpyj8ZtJ8j/rPstf4on7qwm+8Z2Q+y1z98
0Mgu/5ubZvbX/7kPyxbp7zmyn0+e/aw6IDOxfjsysX57O9YCj0fi34OIkC5X
XIVoZYsjmhIEKXQWrM2Rxg9kcRCSUfHmakgoRo4L3AmLttbedtyX/BZMpyfi
Eb5E6ENx9eTlNdpq5AXIr57Sr5ri26NRzOKoAqPp1bd3L6754+KeNyRnmfAD
rK+v0RKYaQjSksHpZX0OHS93DfZ+XatZldO4XGyZE2IAFT1/pmUbRR5yJhvv
fqNhKqZgXRAp7RKR21yKV9KkhsQo/ZsijFl5FXpj5OizYY/YfK3kFussQwly
KqltzmGVqDJPcI1Ox17Fdc0ibsSfIeQz0iGbj9Xd86VTqam1xxTr2mgqq8Zg
ZKPUSNoBIWyM0EaPt1Zjn/lhmv+sdgZwOKbQG5VNpLjTXioSueI2N7HNJHr2
aT1zBjTbYlWZ8MVdSS87HTE8thHCaXsU/H/2RrKWkkJqp41JdgKgF+bYUJIT
yMqtXWvoKyVV5pMKcAXbSPCPp5M2o0keFI+UmFcN3RUpJoltGLZScbfISkss
rrcIxWR+/6XrjTlLgaJHarisy3PeGC7EtWMlBbpze3IJ033FC79CrBMKvxod
MLwyd3XfLpy1hlokiQ/mazqzjoosEIQLlmu/enNd7Fsi9Akjym6I1I3g7/kM
jBaYYzx5t7aVkwRxw0U4V3QxYepYvZd3jgppv4xLQbsFwh2lmqhLuyjzqriS
7FAel1RuFauHuIyEV0pwNZMt8V5sOqAPrCsZcWyyN8edd6jYRj1aJniySmc6
GA3X35SDVZLnHZUCg5reaO2n5kXPXsSG3RICKxscrQjmtq65KsJYbhjXowji
C2eTXcasQmmdSzbMzFIxZznpn9xgwBmBOrFB6IjHwpq8KSoozZ6sA6JHEM1U
n4uZ/iunYd92xEOWRf4xgFuuce0ktupxaypd04BvFnKB31vHmNMKR7Iy5uYs
TtxLUIfc9CAjZ/s2m9jzgxvavI0i75Hi3xogSZVtxvRkBGl44lQ4VDnlkZd0
r0GityViDiX6Ume85zpr3DpOQGnHnCkjquUiJt2Isww/1eGvDpV0fpt0wBEd
5tRu0qsgSlVtbRY62PCpvKNsELf0sgcHBbdw4bcGmeiKZtQC6vK5lkwbsaoy
8Y8ROVGxoTG5yTGWFVyMQmOLUaStvwgEoa6X1F9LOka4fLLy8RYpEg5Vr8pH
eEO6KWuY0P0c5XpSK1dpoWPsuloNvVDQgoyNeT7yesqMTr7UhjdYCdo6CU+4
VkTsQdJ8pmZaUbCkvSe9RRDeoW/yPaI0Q24tCQ+OtfhlwWaGm6UiwSAhXK/0
haJi8SCizaRbj7mzM2bCMSkeMcmRmtFSSrAaSdGlgPENxdR3qkU4SH919893
4CZ4PpompmOmhDRdIljUJ3Ec/pE7+Ah2QPc+yzfFlKnTloFJjpC0ntkV2DE5
MF99GoQBsLNLkbtNzgkwY1mmd9ytYOGLGa4lSJjy2BsLTR+o6SM56Ht6hQQK
ROvhXAYeskAGxPZhcI3wKsRCWB0CReTHLYRy7tgDlVAha/PRWHWs6pt8VOIg
mfTgNk5dhwWLhajddOhE09Ay/24NwpmTnnL+ZxyCSDlFZrKbj16VGyFyEdLv
FWDkSDkaITMDTVeG5SJjjGg3tIg3xybJwmNNfvHXP/+/04H/9c//nfzZjJmK
qXQGaW6U9T2qpF6AzSrp3DJw2vkbmCQMqCZLCT/F8q1nFD7xcxD8/havuxNt
l8FCkpC/EKEZSJ/Wpo0YjuAAqBPKtyWoDZX87tMjTpI1SuBhp5GOkrvqOHoM
4REuoxBoyICALtKyC6YQ10DblDvFsDjrfdqcaqG0c7PEu3+ICuWybXDQ/3SP
8k66Y7H+Y29E+hUFn2rnt/cCvaiYeSta11B2SNtwALPrQDs1GOsT6u89CHI8
bWXDtNHVMCYCXlpABAQnBTrCWtXEyrEj3qOKcQfz3LxBfjMWJOA5fvuMSJg2
uxlYJHyPGtq2SmAV+nB54xk6Mf7jrK+R90ioejezztOGrLL1ZKka6YE2PvOG
F7vU7rryuNd2XEdmOuL9VfOS2Sro+D0wxxRf0ykdqRyOLvDprs9ecm8uYeW9
ycIpRsjFlrBE2ngmvyDn0TnnupxuRi6q3ycjD+S8DIZVqkWfo6G68hClJCCR
/xNrr2IUMXQUn0V25E+QrCBOEZmFUS33JyNFygQuCsDYqdEd6dF6ldpSRsWG
hRnHUUYJR2KsET/6ZUUCl8f5vtgki0a1H41YRI3IcNBIUpRn9oHOrifLWnBr
7pyOqRP5JTOks2OyiHDKXZNFmiKndo0o5wVzV+5O+uO2xGY6sTsITr6utml9
XlMgj3ST0kSqFBO8cFYM9zxqFcY32RIivGXVjiV0NKMDk91iZL0r4akvo2OQ
MiLKwPz8RBqa8e/l5o+7VpsTKwlzNk2Yvcb42cf8thSUkC/irgXtF0OpMlFU
eSGAFGBOWXBR9msEqgpyxZhsrRMFNsPRPtvaT4bcmvi1qqECRXRJVGrDe0HU
8xUDOcfNhJDiDeEK6ywo2RN8DxdfU5bS7qXkTg+hO7bm4mPsxNdMK9tZfXJz
TRwyUYvOtOZ+24Ddk96V64GbinUpmz2XCAgmFOUbqQ1OmjBxWJII9aDWlZbP
4/DogDgrWrZNhjWMLY/luHmzU5wPwvw5QVBu/ohGuO9biYQ4PBYJ2KGQMTZt
sMYY0oahftF5PQZcKu6ZJBxtTHVI/UZa5qMaEbljeKcRimKpDUjcLPSEfIhI
mHkQwsSEndcmmDpD7YHPDqbYQulIKQJDdi3q3LXkO/CrLhjyA6nxvsxWRfql
E5IOUaYAnQIcNfrETJojOzuK3EnUVlptjWOU6EAjE4i6yVLi+lCeJxsZ+ucE
TEifGaqbst+vWmyfyt0k4NZsKFejS1bnEmupAcNcdn9vvYK1m6FbE9Raz4zp
4urZ75+DH3v38rkghLnlbBo3nGBnsfdZmTWbq24BYyOZFJJD7hRVGCQ65x+w
8BepwGKWgXvOxsgGrerv0pnjTkRFceJS6GRid7zS5LJ5bLGX/V2SxspCBuGd
5QYtWZ3uDT2OqJXDx9knixDfyXgV/gTz5ei2kz0aEzo/5z2k68rQbriywI9M
rzXSdQsXSbRTwkPq6HLLGFKHx0ksOBgMVsQeuNC9+53nrYLew+64Zit5bMnf
LUyGI38iNA1AhR14zhwB9voEZ78bW5YMV6ud4YtCjaCccY92dR4zxGuIVd5g
11WDU0DGMIPVE7ghk/ldGY5MlIZ6JuRhUHwiBKWDBRJ5Fp9LQyrJluSReMtS
LkaRDSKKZGgP/KkzWVjAQRGFSVy099UKifs0Isb7cy+FEEq+5sLhd+fciQxJ
w62LxXFiVjs/h9wOUR8QVfM4tT3OYIddifdSOyRLN56Iz/6JNGLo1mGECiUY
Nhp0I7jkt7lwJ4VU1d59THsJMT9nPLC2/dRwhK/v9kR/2iQU0YLee67pMDUh
+qSm8IOdB22MPWjLhp7vq5l2Fh+di22UToZ7oZok460NcA08a1rvbYU99ohI
aTH25pyHbYvbxBjI26K6Zu2tiomF71Ue7GC2hREAFA7P9WdFJQ9QW06+ryAU
+oR85MMzqToITKTCA+6vM4kwSaPacNFnhS/gg+U7pk/YtBQavin5ElcsRDyD
h5l6RJ/Q756+DH2n6vM1wYR7ynac+j6mUuXs3LBD2PXDYrpKi9ABbZCcMzIu
5jvnOTG6+Ys5P87By1lwPVJpiMzqJNDFC8H6RE0ADtPOLkH+XI+0aEw+j0zm
Z8H6zMoo7KjRisIW665gbacvNHaCtYhW/oSZE8Clt0/4a9TyfW4oZFOoLT5o
PWzH9TLfvriTj62CbFVyyGwusrweoWAKWgRhLOauk8U/sho22miXjjwi7NEH
Zr+47NKzK2vYwXXYzjKTvS+IGTlGeeJfenALCSNJIuo3lFnwfFFFxlf6OZV8
1pkjBrEsxyw2HWtZyRept/dQnxUUtpkIxIfEvSbI9+nDs44X6Cis+Obqxcs/
vOivF5NRDFLrdNxj6Q5aKD3zUnNYYa9c8aG4IEw6XqrpjBEAg/ujN4AtA4+V
UL6ESZy5TYs6ataxwxpLE+IpUr4SEsMaOebVR9KCm8KhPi740G+fBbSYlQ61
p409SX559/1r/gN7Cs1Zl43REiH3r6aRCm+p1Yv9hdHky4JtGn3OAxp3XtMN
m/X7BftXqEVy4B11BsBrdGLHOq6KqzlywXk7KWTY1mJDad4YFv3dWTgz+Yjj
KMH6rttVWceoHz5XTonAAIqlzRd/Ytpo+zDxD0n+G2UyIRocQcBdcTJKa/oc
DWWpGVnJH5RbwbdJ4tvqSqrhpvi6VJ0xkiUkw6iHjob7RgGn3srfR99EnEuy
Y5BLRovBYpYzCEWMmdR42JbYKZGmYtQB1OaG4ikzyi16lkjZTKnGsULKB7go
mhJznmSUMOu/tTlQbaQsVaQGuHCxopbjli4x9vSYNvElQo+uPB5Tw6aLvMYO
NQ4Nu5L3dlbplOpzoixKzh0q4k4OLnslb1M6SitybRAbIiQ7ZAhmS4XsI9M3
M/vJuUHGeITgyOJCfE8evpBuCbHYMWaaWbHWVoeaVQ6CIn8yNgTgA3jLOZXD
CD0inrf4ax8b/eatOWSZNQDKayzdY0lasNgNNUUKOHIXCwcppc6hTQeL48/A
dGLHIvuaB2Rg/FWfZVvp7HI+feOhqDdyIpfffPFGzYk+5+QQpgwZJ2n1/bln
PnCMN0uvAh2jzlgiRCqxbcZmA2rj5/mxiEVkwKIC3OGTZBbk+KV38H6woEvR
ISGvKZdxFFi/iAUZGXCSOQT5MJD0aI8c81agmAdynTPbNeFvnr18WazBYMNm
VkFHYSUI45gZxnxMqUN3EP+7cESzHFVsQ4ct5jGWVZW1U0BfffPmm2spi4fl
XWmYR5afaK3V+rEvjdCULPo2xrs3skLR0qsoahzDDmICXzbVjTEnc+beYMtG
cabN+FUNxSwBGegjnlkugtbmIe5pkxYhWpQoHqdWsurvXO7SWYgOYkkjW2pb
YfLDli+uWV/becUHrahkmZba8kwac4hpfj+JSoSOhzy/X2E1sqjkzDzomaPG
Vmq38G+ipjFXPMPXrjS/E6Dw+br8sn+/y8pR0BllONnbfE8RPqSKB2m9F95R
oTxglopy+sbP32v5gjQQDJm6+UlF2Hg9tsYMtqbLnE0pA6ALuGlizIQOMafG
48Z5dl8RBGz3YGZmKDZgDx9hajcFmTgW7cPlokGZXzOd2JwdTllVUNOeivKO
FBlq3us6PTRaNaQoSb6UMWoysgl77U4s8tFxR/DqL9lC8EQ8zeU9xlbUvL58
ccpZVvwZGa5LerJFSa6e0S+u+frwD9zvtkGlLGUCevj1a9rEpePKg1LaDIFE
vC/NI6zFAELklcS2zWP9kEhOmIhf7uULsfib8Q3xYxMFRr5omcRgYai1SbHk
wtm1xEIQ6ZmNKDdOBh8fIgiqCAhg5aWuqj51Zjubey7t8PDAIxeTbhC3wRuk
Atzeytm/B5T0pyOa4xX2ms/xruI70k3mrzPATZ/hypPkH/Y4p2pSvQN98CR9
QON6Dza4zzMzlv6oYNDvK7ClN2rQDX4OJ2sWtpu28mkm/OVb7NfkgA3rT93L
2nCueYg7PyqeoCgsKHcsimCfz28NnNPNOaYNxk7JJY/qyxbL3Bc03Bmw4ExY
Dmdj0I1RypGofTPlrtC1LHo8a7yLNyDt0nWMosVZeGqHzLfpPIm0jKlHNon1
HjbpepAoCt9pLTpzw4ZjJnTJqGbm0Dba1VBMAkbaGifpKyKB1b4RX0iJzIs3
r558cW0BUHXkeqeemQLg8n7y5pfAIBrWiYae8bMnY7LRfOl0rcWdc68KS+nV
GyRrsWFl+V0qtVDTXvyKGqGWf1E/EHEAsl5l2CGhJ9xR3qUysiTZSQlvMrJ6
IP6+Rlq6WZ5ZPbMAgnDNnrcmv5uDmTrNnUb/sjW5M9bYZ8oaCytx9+zazMWv
wEDpDH9QfKsUusXVVy+/vY7dxOWYzbUR58ILvQ6x/iKDOGyq3SGgAbtUhbSs
gRyuXn0N27RNG6ln8T98+fW1NifEe91nf3z99TUjBimlSM1j+OjjX+UQRSST
tU6myieSg6fDCiWYR26LdGzXe9i5FXE49bAwl3spS1R6RnyIAM/NZou+CQeC
flb2hUTm16qDyjU3Eh0EiYDEa4xgY2d9kqETUQTqUMPT2WvIUOjANqnuU5+7
0pkuRqTQvrLDpW0dY+IQLYwhlFZFtBD3fg+NTCuHMWqYgiSc1orGUOCRQsb4
nN7NWhvzFmZPFEgcvwiKhC7A3TN6OpxhuyBUH0Md0gcpZULTJEmmt49YI8b7
pLJ52FNB7jC/r+PcRdgzjCXSDo/dTWY1GdU8BXeHk4x5tH5St6RuM+L7gmkm
kJpMW47hreEm08Enac0UVkT9rJ4oEnxr6tKFo7ml5BUpN0xq9gJl5b8WDmG9
uHZojVm4U73Su1gH8Hu0J9k/vbuWqbVHkYDKYUMe/ywKhCa24Fkt8mv1Yuz8
EgEq18/xCffaZfhIyKjDfaCaUm6srqefgpTCeWNhm+6emxd7gkislfDEs8lX
8FraA2EKYvoenRYRyBs+v5vTWiNN1nEtemJ4tuqUhRDUKtmSYWwAncz6SNq2
3UPTAYfIKrM/lo2HMZRnh5rSVE3NXW+ajbVdcyhntKSFqzWLavqRXKiAc0t7
zirn7ZQYjQkCV+n5idegaZRDhMohUYZms3Xa5vo0RkZqFM2EN80t2Jb2Yo3/
KcJE/YGqG5sKPcLtOowspKPO7KE8zwEBrO+dep6hEd5lKIDXZBlILGtmavYw
58TBltGIo0Mp2nx3srcJvgqbkaufrLEvyYSfGtanlvuMibv7fLvpgAeTb0wu
3ChsfGwult1MuLWSfFuAovCzfy31mPg8qa74NZUcq68zMzJSKNq+2+ml3jco
PlPqG/jA8ixyPsjs7CCJqBoGOMZfU58GaR9bBOvGws88O/nETfFd9nlrkD46
AIQx7ImVE67KRk6rLE4AunisMwY1uLF6FbnOboqZWpvY+pabrsdGVbCm3PVO
mnjg+aFfZPFKX9Fesnf8GwrvskjTuW3axOWM1KaZ0wD4ci3WPoAddxokRCJB
n9B1dnqcbgpJDuC2XRiaLL8j5miQ6HhRI9KZI8iYcRpQkSr2KVReYEOukKZu
6U9hCW7EBQtraprHISrjrKKabaGQzlIUPvCrF0+/fHkd2kfGBG0gHBA1prnC
EPd+PFOYw64eSxZqF+sRC0lA7UgaMZYCsZ7l31mLDA4m+fpxYpDvlFo4mhQU
QcSLxEcDV6QXlNPc2zAUZzh2mQpfADRJJKtfY20erBiYSDR1y+0w+mn0XB1N
l5YZfAy11Eo8l3BCeLfQmFqlcKZf/OGlhC4/Q7SUTFi79uZwNL7Nk+XAuWTC
hwEFWiDARAOD4XSZ8EfPRxKgFqybqUnWucEVyOKhDjuIL82CM6NIEwVlYrNw
aWnzISC0q++Ix8NTJNk57cEZ5uvuXZH7fH/Z/PkMoyqJagIuw4BUqprNQQKP
ABl5HfU0qEiqIChIfZThR66lQhE/cRssGnGwFLATSQW0Vz0eS0GVGbtM0BNc
5+G/cAdPKyoi3cwkR3NV3aSbxfs+JcYLCR4BP8mUaC9v5SwcywFBtiI+tWvM
1VoxPmXft+uKzqQYKFl1ni7ItXQ9Yq1zMeGje2YHH8djd2fUqZ0beooKm2EK
cqaQ7ySCr3lJzvdFqgpP+dHmT2S2kC9Yc/Ss8oNGjA6lbI/cZnvoqHH35O88
AxrkfmzAq2WtIUblewrx/G8eS4LMBer9woxk/1xuhIZ19Yz+e+3BAPUG65ic
iqPCzy+5uJwZVL9jb9JnZPQ9RvU+zUjaiHiNC71x7iTiY8f+06OPZHvGU00x
oDf224JVGhgEcHZ0SN8T9vxOwp7+BXck5uO33z3n+K1aumxxPPVz6DFC8cLB
csBQ4X/Gmiay/m0D3A0YxnFEOv9+5iMLOa3nXCaHJnM5kYChmAvhXM5OMWsK
x3RR4G6zaC5fepMqIvg3Et4lM11kp9o4gf0lyFi5SriMAvzoL5gc7O3CM9eD
inmKOBXtit5OuYSxmIaF4bgLwtQXIkA1zPXoItTa/YXQmhPflD21Kq5QtBjD
Ocs2a/ScX/YjTKtHr8YQJeH7H5HhPxfKfFbGff5nc7Z5J+pIY+eJY7dRKj9Q
UorzNtniDEMWtnLBorsWpm9mXgz9LrBFO28ol9kjSdbI2KKiNPagBFrlTlPu
9y+/1MoLfI/23xDJ2uMnv6fDcbm4ghGNhwNipCdaEd3JJdhe7CpkpER//fNf
2u2QELzAYIm2iWd77+DMCRw+upZreVbghxTYEYbucExd2tYUMUXLWdtR8C2o
YP2/x5YJ4Arfk9vaceMNRl6J4sC6F68Yvx3VN/KCjGMP4nuhZ0iDmHABMvnl
wB53z16i4RXYAyJQEEHi8Eu/DDWZFIq6b+tT4BfQZAccN+pkXyCyiMDNWLO0
r7YaFcSPeURTzTyth+pjaIfvNCeMk1TIKzAWd1uOC1e96NSjXpFlwCTUrsNk
IClGMbuQFUnqVq3EWGsTLTYUipdgqk46lJGvcF+V3QlJu5VHRNN7+W5l/g2d
TpYr2dHkFg0X/qjkGIb2kxRGx6gWRtBIkKpQYiGtz36X1icrqrKs7MI5y9vV
HzmrEnJWAkHg4JlYiZ36mzwXDPqCwCMtUEkhZ9hgEGSIqwIDhPGJn3G8oEdW
ofPoxJWc31feD434Ig0ZrQt9miyOCyu7beG2MlTFxmGH9ZbdtRaXia8HAgra
+p5pMXUZcPH5ThAwjkP9dGdnqtdzNJHsDoY6+gTP7bUEt7w04D4Np6NtK/q1
trVSMWCl0qNmZAjloeygwZrHQXmpeqaHGkvYLApd7697WtN0pF4xEwQMPebC
9lpo17SlWkn9yWgrFae+4AJium9o/I4eqwEjvKMCG0VFYpoXm7Kld0f0Bxst
zjl7VkWaDvha0LWsa+MZYaGKnuzoxVT8ZsyDnwwo9ZJy2n1SwMUGqf0JtgNw
ofLJnKzIUcvlATf/2yZ51TmSU5F5UjJZ59nYcnLGOg1D5gszYEQeOZJKZeBp
iSzsv8Ntva+64URy0umvPBBtzKxwnmn2C6pnSRg8YXGIjCwnIgPTcvHqgBW8
CLIVZgg4fdnKCGPKapoc97OdrdmiCOCCYcw7qNXWVK2BlfH9iaxAbEDkjfvg
6Vw1Q/bPGlad8x4Sv96dSiw7TXxd6bVEvUBEckgGpGEJxM0QXo4pXc3qsfRU
eDIxdjDOLhTKWi0LUVHpqjiCYcx/JxNJm6iWaPGIgfcskAKpCNbaj9U5miVO
uMgxGX7SjdBbaxEm3+CCj9piQouC9yjQWdHbEOnMVC3R8tR2JeVmg+mktBnX
XUmth6TIuaRikW8ZR8XKutrME0kzWibjmhJMIwx018IOsNozHKQzg8ajJua1
kVJRGFLlFLqI4kfA+pwaKsTFOvLm7GD/HMhCBJmEoiTJZu5dpgGUnyovUTNC
cHcJ4IK6I0sY6hm2rUXkmQxMeLauvCoZxaKKD8/NiBerYT9fVTO9XysRnvK/
8HGWdA3/aZ39iYQ5llydhnqM3dmAviV3wYmYlFzPqoFCKQQtL4/j+d03dzNj
YE5cfSpGZpqWPyvl1MxghN+EFe31a3c91byzdpF+6/ei2RnBTbThnIjhnCRC
UwZNa3QNEXdQfHKVlBAaSczQ9EE7hfq0W1FBKOMeVWbTrZF307mpBqG4my+O
NspOpGXkUnu12ITUF8d49xyxPtSfvkAyPmxQUzEBbdVLlQQZ82JdacKwTu8W
UeVrVIzzfNFazlkohC0hojGMkC2EA5XKYcT5O0rYbil1g6cdowDLum3fCu5Z
PMaWdY5x8p9R8HnJIhpPcEmpQt6ZCcaUdBqjiKQjvP59qIYwgg5PubZOa/WL
GAXkzMvCmjKqO2U0rxzMm5WdEVOV77f5WipW+4j1g09g3FetMBcOcqtEDATS
gZH3gDvxe6MbsMETzQanpEZUD6gPdt5zic7akr3NyGinVARIrKXlXUQB1w8R
zOYyaeKrC5luxieRxbSQSADZDKjC8oGS1Mmqq5B+QQkf1OXjVQpnrSeSHoIc
ZogIIxJxvvuMUoRc2kgbYmdIOQv+BxXOdVjhYQEA

-->

</rfc>
