Five-time honoree continues to build an inclusive, purpose-driven workplace rooted in trust, growth, and shared values. OKLAHOMA CITY (June 17, 2025) – Phase 2, an Oklahoma-based software engineering and AI consultancy, has earned national recognition by being named to Inc. Magazine’s 2025 Best Workplaces list. This annual honor highlights organizations that set the standard for employee engagement, workplace culture, and overall well-being. Phase 2 is proud to be the only Oklahoma company to receive this distinction five years in a row—a testament...

What is the Sequential Thinking MCP Server from Anthropic? The Sequential Thinking MCP Server is one of many Reference MCP Servers released by Anthropic to demonstrate the capabilities of their MCP protocol. This particular server provides structure to augment a given AI’s thinking process. The Sequential Thinking server does not do any of its own “thinking” or decomposing of a problem. Instead, it deterministically receives structured input from an AI, validates the data in the...
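A rough sketch of the pattern described above: the server receives a structured "thought" payload from the model and deterministically validates it. The field names (`thought`, `thoughtNumber`, `totalThoughts`, `nextThoughtNeeded`) follow the reference server's schema, but the validation logic below is illustrative, not the server's actual code.

```python
def validate_thought(payload: dict) -> bool:
    """Deterministically check a thought payload, roughly as the server does."""
    required = {
        "thought": str,
        "thoughtNumber": int,
        "totalThoughts": int,
        "nextThoughtNeeded": bool,
    }
    # Every required field must be present with the right type.
    for field, ftype in required.items():
        if not isinstance(payload.get(field), ftype):
            return False
    # The thought's position must fall within the declared total.
    return 1 <= payload["thoughtNumber"] <= payload["totalThoughts"]

payload = {
    "thought": "Break the problem into sub-goals before solving.",
    "thoughtNumber": 1,
    "totalThoughts": 3,
    "nextThoughtNeeded": True,
}
print(validate_thought(payload))  # True for a well-formed payload
```

Note that all the reasoning lives in the `thought` text the model supplies; the server merely enforces the structure around it.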

Office Manager Magic, Powered by Multi-Modal AI Introduction AI isn’t a futuristic concept; it’s practical labor, available today. Not just for call centers or simple tasks, but for real, dynamic work that used to require a person. This isn’t about replacing people; it’s about helping teams do more with less. The following example highlights how a LangGraph-powered system is already handling quoting and scheduling—tasks that apply to nearly any business. The bigger point: AI agents are here, and they’re ready to do...
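A toy stand-in for the quoting-then-scheduling flow mentioned above: two steps passed over a shared state dict. A real LangGraph build would wire these as `StateGraph` nodes calling LLMs and calendar tools; the step names and logic here are placeholders for illustration only.

```python
def quote_step(state: dict) -> dict:
    # In production this step would price the job with an LLM and rate tables.
    state["quote"] = f"Quote for: {state['request']}"
    return state

def schedule_step(state: dict) -> dict:
    # In production this step would check calendars and book a real slot.
    state["appointment"] = "first open slot"
    return state

PIPELINE = [quote_step, schedule_step]

def run(request: str) -> dict:
    """Pass a shared state through each step in order, LangGraph-style."""
    state = {"request": request}
    for step in PIPELINE:
        state = step(state)
    return state

result = run("mow the lawn")
```

The design point is the shared, accumulating state: each step reads what earlier steps wrote, which is the same pattern LangGraph formalizes with typed graph state.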

In a recent exploration, Braxton Nunnally of Phase 2 Labs examined how Zep—a memory management tool—can help AI systems retain and recall important information over time. This kind of “organizational memory” allows AI to move beyond one-off interactions and instead offer consistent, informed responses that build on past context. Common Business Pain Point: "Our AI tools don’t retain context or past interactions—users repeat themselves, teams lose knowledge, and we miss opportunities to respond more intelligently." What the Team Learned: AI Needs...
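To make the store-then-recall pattern concrete, here is a toy in-memory stand-in for what Zep provides. Real Zep persists sessions and retrieves semantically; this sketch is only meant to show the shape of memory that survives across interactions, and none of it is Zep's actual API.

```python
from collections import defaultdict

class SessionMemory:
    """Toy organizational memory: store turns per session, recall them later."""

    def __init__(self):
        self._sessions = defaultdict(list)

    def add(self, session_id: str, role: str, content: str) -> None:
        self._sessions[session_id].append({"role": role, "content": content})

    def recall(self, session_id: str, keyword: str) -> list[dict]:
        # Real systems use embedding search; keyword match keeps the sketch simple.
        return [
            m for m in self._sessions[session_id]
            if keyword.lower() in m["content"].lower()
        ]

memory = SessionMemory()
memory.add("acct-42", "user", "Our renewal date is March 1.")
memory.add("acct-42", "assistant", "Noted: renewal on March 1.")
relevant = memory.recall("acct-42", "renewal")  # both turns mention "renewal"
```

The key idea the article highlights is exactly this: a later interaction can query `acct-42` and get context the user never has to repeat.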

In a recent analysis by Alan Ramirez, Phase 2 Labs explored how organizations can reduce the operational costs of Large Language Models (LLMs) by implementing context caching—a method that stores and reuses the static parts of AI prompts. This strategy minimizes redundant processing, leading to significant cost savings. Common Business Pain Point: “Our AI tools are powerful, but the cost of running them is escalating quickly—especially as usage grows across departments.” What the Team Learned: Understanding Context Caching: By separating static (unchanging) and dynamic...
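A hedged sketch of the static/dynamic split the article describes, shaped after Anthropic's prompt-caching request format: the large, unchanging system context is marked cacheable so repeated calls reuse it, and only the small dynamic user turn is reprocessed. Field names follow Anthropic's prompt-caching documentation; the model ID and context text are placeholders, and other providers expose caching differently.

```python
# Static part: large, identical on every request, so worth caching server-side.
STATIC_CONTEXT = "You are the support assistant. Full policy manual follows: ..."

def build_request(user_question: str) -> dict:
    """Assemble a Messages API payload with the static block marked cacheable."""
    return {
        "model": "claude-3-5-sonnet-20241022",  # placeholder model ID
        "max_tokens": 512,
        "system": [
            {
                "type": "text",
                "text": STATIC_CONTEXT,
                # Marks the static block for reuse on subsequent calls.
                "cache_control": {"type": "ephemeral"},
            }
        ],
        # Dynamic part: changes per request, stays small and cheap.
        "messages": [{"role": "user", "content": user_question}],
    }

req = build_request("What is the refund window?")
```

Because billing is driven by tokens processed, keeping the policy manual in the cached block means repeat requests pay mostly for the short question, which is where the cost savings the team measured come from.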