
AI model development has reached new heights, especially for small language models (SLMs), where efficiency and performance are paramount. Among the latest contenders, Phi-4-mini and o1-mini stand out as advanced yet efficient models. In this article, we compare Phi-4-mini and o1-mini on user experience, speed, and performance across STEM applications and coding tasks. We evaluate their strengths in programming, debugging, and overall efficiency to see which model performs better. By the end, you should have a clear idea of which model fits your needs.
What is Phi-4-mini?
Phi-4-mini is a state-of-the-art SLM designed for high-performance reasoning and coding tasks. It strikes a balance between efficiency and accuracy, making it a strong contender for AI-driven applications. The model is built for high-precision text generation and complex reasoning while remaining computationally efficient, which makes it well suited to edge computing environments.
Architecture Overview
Phi-4-mini is a dense, decoder-only transformer model with 3.8 billion parameters and a 128K-token context window. It supports a vocabulary of 200,064 tokens and employs Grouped Query Attention (GQA) to optimize resource efficiency while maintaining high performance.
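If you want to confirm these published figures yourself, the checkpoint's configuration can be inspected with Hugging Face transformers. The snippet below is a minimal sketch, assuming the "microsoft/Phi-4-mini-instruct" checkpoint used later in this article; the exact config field names may vary between checkpoints.

```python
# Sketch: inspect Phi-4-mini's configuration (assumes standard
# Hugging Face config field names, which may differ per checkpoint).
from transformers import AutoConfig

config = AutoConfig.from_pretrained("microsoft/Phi-4-mini-instruct", trust_remote_code=True)
print(config.vocab_size)               # expected: 200064
print(config.max_position_embeddings)  # expected: 131072 (a 128K-token window)
```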
Grouped Query Attention (GQA) is an efficient attention mechanism that groups query heads to share key/value heads, balancing the speed of Multi-Query Attention (MQA) with the quality of Multi-Head Attention (MHA), thereby improving inference speed in language models.
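To make the mechanism concrete, here is a minimal sketch of GQA in PyTorch (our own illustration with toy head counts, not Phi-4-mini's actual implementation): several query heads attend using the same key/value head, so the K/V projections (and the KV cache at inference time) shrink by the group factor.

```python
# Minimal GQA sketch: n_q_heads query heads share n_kv_heads K/V heads.
# Toy sizes for illustration only, not Phi-4-mini's real dimensions.
import torch
import torch.nn.functional as F

def grouped_query_attention(x, wq, wk, wv, n_q_heads, n_kv_heads):
    b, t, d = x.shape
    head_dim = d // n_q_heads
    q = (x @ wq).view(b, t, n_q_heads, head_dim).transpose(1, 2)
    k = (x @ wk).view(b, t, n_kv_heads, head_dim).transpose(1, 2)
    v = (x @ wv).view(b, t, n_kv_heads, head_dim).transpose(1, 2)
    # Each group of query heads reuses the same K/V head
    group = n_q_heads // n_kv_heads
    k = k.repeat_interleave(group, dim=1)
    v = v.repeat_interleave(group, dim=1)
    scores = (q @ k.transpose(-2, -1)) / head_dim ** 0.5
    out = F.softmax(scores, dim=-1) @ v
    return out.transpose(1, 2).reshape(b, t, d)

d, n_q, n_kv = 64, 8, 2                 # 8 query heads, 2 shared K/V heads
x = torch.randn(1, 10, d)
wq = torch.randn(d, d)
wk = torch.randn(d, n_kv * (d // n_q))  # K/V projections are 4x smaller
wv = torch.randn(d, n_kv * (d // n_q))
print(grouped_query_attention(x, wq, wk, wv, n_q, n_kv).shape)  # (1, 10, 64)
```

With n_kv_heads equal to n_q_heads this reduces to standard MHA, and with n_kv_heads = 1 it reduces to MQA, which is exactly the trade-off GQA interpolates.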
Key Features
- Shared input-output embeddings: reduces memory overhead by reusing the same matrix for input and output embeddings (see the sketch after this list).
- Training data: trained on 5 trillion tokens, including high-quality educational material, coding examples, and synthetic data tailored for reasoning.
- Performance: excels at reasoning, math, coding, and instruction following, and can integrate external APIs through function calling.
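The first feature above, shared input-output embeddings, is conventionally implemented as weight tying; the sketch below illustrates the idea in PyTorch (a toy model of our own, not Phi-4-mini's code):

```python
# Toy illustration of shared input-output embeddings (weight tying):
# the output projection reuses the input embedding matrix, so only one
# (vocab_size x dim) table is stored instead of two.
import torch.nn as nn

class TinyLM(nn.Module):
    def __init__(self, vocab_size=200_064, dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.lm_head = nn.Linear(dim, vocab_size, bias=False)
        self.lm_head.weight = self.embed.weight  # tie the weights

    def forward(self, ids):
        h = self.embed(ids)  # a real model would apply transformer blocks here
        return self.lm_head(h)
```

At a vocabulary of 200,064 tokens, tying saves a full vocab_size × hidden_dim parameter matrix, a meaningful fraction of a 3.8B-parameter model.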
What is o1-mini?
o1-mini is a lightweight, cost-effective SLM designed to balance affordability with performance. It prioritizes efficient processing while maintaining reasonable accuracy for general AI applications.
Architecture Overview
o1-mini follows a standard transformer architecture with fewer parameters than Phi-4-mini (the exact size is undisclosed). It also supports a 128K-token context window, but it focuses on cost-efficient processing rather than architectural optimizations such as GQA.
Model Comparison: Phi-4-mini vs o1-mini
Phi-4-mini is a capable model built for tasks such as reasoning, math, and coding, while o1-mini takes a simpler design focused on cost-effective coding. The table below summarizes their main differences:
| Feature | Phi-4-mini | o1-mini |
|---|---|---|
| Architecture type | Dense, decoder-only transformer | Standard transformer (limited details) |
| Parameters | 3.8 billion | Not disclosed (generally smaller) |
| Context window | 128K tokens | 128K tokens |
| Attention mechanism | Grouped Query Attention (GQA) | Not explicitly detailed |
| Shared embeddings | Yes | Not stated |
| Training data volume | 5 trillion tokens | Not disclosed |
| Performance focus | High accuracy on reasoning, math, coding | Cost-effective coding |
| Deployment fit | Edge computing environments | General use, less robust |
Phi-4-mini stands out with advanced features such as GQA and shared embeddings, making it stronger at reasoning, coding, and API integration. o1-mini, by contrast, lacks Phi-4-mini's architectural refinements but offers a lighter, more cost-effective alternative optimized for coding. Choosing between them comes down to whether your task prioritizes high accuracy and reasoning ability or affordability and efficiency.
Evaluating Reasoning Performance
This section examines how the Phi-4-mini and o1-mini models perform at reasoning compared with larger models, focusing on how well they solve complex problems and draw logical conclusions, and highlighting the differences in accuracy, efficiency, and clarity between smaller and larger models.
Phi-4-mini and o1-mini vs Larger Models
The reasoning abilities of the reasoning-enhanced Phi-4-mini and o1-mini were evaluated on several benchmarks, including AIME 2024, MATH-500, and GPQA Diamond. These benchmarks assess advanced mathematical reasoning and general problem-solving skills, providing a basis for comparison against several larger models from DeepSeek, Bespoke, and OpenThinker.
| Model | AIME | MATH-500 | GPQA Diamond |
|---|---|---|---|
| o1-mini* | 63.6 | 90.0 | 60.0 |
| DeepSeek-R1-Distill-Qwen-7B | 53.3 | 91.4 | 49.5 |
| DeepSeek-R1-Distill-Llama-8B | 43.3 | 86.9 | 47.3 |
| Bespoke-Stratos-7B* | 20.0 | 82.0 | 37.8 |
| OpenThinker-7B* | 31.3 | 83.0 | 42.4 |
| Llama-3.2-3B-Instruct | 6.7 | 44.4 | 25.3 |
| Phi-4-Mini | 10.0 | 71.8 | 36.9 |
| Phi-4-Mini (reasoning trained) (3.8B) | 50.0 | 90.4 | 49.0 |
Source: HuggingFace
Despite having only 3.8 billion parameters, the reasoning-trained Phi-4-mini delivers strong performance, surpassing larger models such as:
- DeepSeek-R1-Distill-Llama-8B (8B parameters)
- Bespoke-Stratos-7B (7B parameters)
- OpenThinker-7B (7B parameters)
It also achieves performance comparable to DeepSeek-R1-Distill-Qwen-7B, a considerably larger 7B model, further underscoring its efficiency. Meanwhile, although o1-mini's parameter count is undisclosed, it leads on several benchmarks, making it a strong contender for AI reasoning tasks.
Benchmark Comparison
As the results above show, both models hold their own against much larger competitors:
AIME Benchmark:
- o1-mini scored 63.6, the highest of all models compared.
- Phi-4-mini (reasoning trained) scored 50.0, a five-fold improvement over its base version (10.0).
MATH-500 Benchmark:
- Phi-4-mini (90.4) slightly edged out o1-mini (90.0), making it highly effective at complex mathematical reasoning tasks.
GPQA Diamond:
- o1-mini led decisively with 60.0, demonstrating superior general problem-solving ability.
- Phi-4-mini (49.0) outperformed several 7B and 8B models, proving its efficiency on reasoning tasks.
These results show that o1-mini leads in general problem solving and reasoning, while Phi-4-mini (reasoning trained) excels on math benchmarks despite its small size (3.8B parameters). Both models demonstrate remarkable efficiency, challenging and even outperforming much larger models on key AI benchmarks.
Phi-4-mini vs o1-mini: Reasoning and Coding Capabilities
Now we will compare the reasoning and coding capabilities of Phi-4-mini and o1-mini. To do this, we give both models the same prompt and evaluate their responses, calling o1-mini through the OpenAI API and loading Phi-4-mini locally with Hugging Face transformers. These are the tasks we will try in this comparison:
- Analyzing building order relationships
- Mathematical-logical reasoning
- Finding the longest substring
Task 1: Analyzing Building Order Relationships
This task requires the model to infer the relative positions of buildings from the given constraints and identify the one in the middle.
Prompt: "There are five buildings called V, W, X, Y and Z in a row (not necessarily in that order). V is to the West of W. Z is to the East of X and the West of V, W is to the West of Y. Which is the building in the middle? Options: A) V B) W C) X D) Y"
Input to o1-mini
from openai import OpenAI
import time
from IPython.display import display, Markdown

# Read the API key from a local file
with open("path_to_api_key") as file:
    api_key = file.read().strip()

task1_start_time = time.time()
client = OpenAI(api_key=api_key)

messages = [
    {
        "role": "user",
        "content": """
        There are five buildings called V, W, X, Y and Z in a row (not necessarily in that order).
        V is to the West of W. Z is to the East of X and the West of V, W is to the West of Y.
        Which is the building in the middle?
        Options:
        A) V
        B) W
        C) X
        D) Y
        """
    }
]

completion = client.chat.completions.create(
    model="o1-mini-2024-09-12",
    messages=messages
)
task1_end_time = time.time()

# Print the raw message and the elapsed time
print(completion.choices[0].message)
print("----------------=Total Time Taken for task 1:----------------- ", task1_end_time - task1_start_time)

# Render the model's answer as Markdown
display(Markdown(completion.choices[0].message.content))
o1-mini's Response
(screenshot of o1-mini's response)
Input to Phi-4-mini
from transformers import pipeline, AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
import time
from IPython.display import display, Markdown

# Load Phi-4-mini in 8-bit to reduce GPU memory usage
quantization_config = BitsAndBytesConfig(load_in_8bit=True)
tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-4-mini-instruct", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("microsoft/Phi-4-mini-instruct", trust_remote_code=True, quantization_config=quantization_config)

task1_start_time = time.time()

messages = [
    {"role": "system", "content": "You are an expert in solving numerical and general reasoning questions."},
    {"role": "user", "content": """There are five buildings called V, W, X, Y and Z in a row (not necessarily in that order).
V is to the West of W. Z is to the East of X and the West of V, W is to the West of Y. Which is the building in the middle?
Options:
A) V
B) W
C) X
D) Y"""},
]

pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
)

generation_args = {
    "max_new_tokens": 1024,
    "return_full_text": False,
    "temperature": 0.0,
    "do_sample": False,
}

output = pipe(messages, **generation_args)
task1_end_time = time.time()

print("----------------=Total Time Taken for task 1:----------------- ", task1_end_time - task1_start_time)
display(Markdown(output[0]['generated_text']))
Phi-4-mini's Response
(screenshot of Phi-4-mini's response)
Comparative Analysis
o1-mini arrived at the correct answer ("V") in just a few steps; the constraints force the west-to-east order X, Z, V, W, Y, so V is indeed the middle building. Phi-4-mini took much longer because it worked through every detail step by step, and even with all that effort it still produced a wrong answer ("Z"), which was not even one of the options. This shows that Phi-4-mini can struggle with simple logic problems that o1-mini handles quickly and correctly. Phi-4-mini's detailed deliberation might help on harder problems, but in this case it only caused delay and error.
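As a quick sanity check of our own (independent of either model), a brute-force enumeration confirms that the constraints admit exactly one arrangement, with V in the middle:

```python
# Enumerate all orderings of the five buildings and keep those that
# satisfy the constraints (index 0 = westernmost position).
from itertools import permutations

for order in permutations("VWXYZ"):
    pos = {b: i for i, b in enumerate(order)}
    if pos["V"] < pos["W"] and pos["X"] < pos["Z"] < pos["V"] and pos["W"] < pos["Y"]:
        print("Valid west-to-east order:", "".join(order), "-> middle:", order[2])
# Prints: Valid west-to-east order: XZVWY -> middle: V
```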
Task 2: Mathematical-Logical Reasoning
This task requires the model to identify the pattern in a given number series and find the missing number.
Prompt: "Select the number from among the given options that can replace the question mark (?) in the following series: 16, 33, 100, 401, ? Options: A) 1235 B) 804 C) 1588 D) 2006"
Input to o1-mini
import tiktoken

task2_start_time = time.time()
client = OpenAI(api_key=api_key)

messages = [
    {
        "role": "user",
        "content": """Select the number from among the given options that can replace the question mark (?) in the following series: 16, 33, 100, 401, ?
A) 1235
B) 804
C) 1588
D) 2006"""
    }
]

# Use a compatible encoding (cl100k_base works for newer OpenAI models)
encoding = tiktoken.get_encoding("cl100k_base")
# Count the input tokens
input_tokens = sum(len(encoding.encode(msg["content"])) for msg in messages)

completion = client.chat.completions.create(
    model="o1-mini-2024-09-12",
    messages=messages
)
output_tokens = len(encoding.encode(completion.choices[0].message.content))
task2_end_time = time.time()

# Print results
print(completion.choices[0].message)
print("----------------=Total Time Taken for task 2:----------------- ", task2_end_time - task2_start_time)

# Display result
display(Markdown(completion.choices[0].message.content))
o1-mini's Response
(screenshot of o1-mini's response)
Input to Phi-4-mini
task2_start_time = time.time()

messages = [
    {"role": "system", "content": "You are an expert in solving numerical and general reasoning questions."},
    {"role": "user", "content": """Select the number from among the given options
that can replace the question mark (?) in the following series: 16, 33, 100, 401, ?
A) 1235
B) 804
C) 1588
D) 2006"""},
]

# Reuse the tokenizer and 8-bit model loaded for task 1
pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
)

generation_args = {
    "max_new_tokens": 1024,
    "return_full_text": False,
    "temperature": 0.0,
    "do_sample": False,
}

output = pipe(messages, **generation_args)
task2_end_time = time.time()

print("----------------=Total Time Taken for task 2:----------------- ", task2_end_time - task2_start_time)
display(Markdown(output[0]['generated_text']))
Phi-4-mini's Response
(screenshot of Phi-4-mini's response)
Comparative Analysis
On the number-pattern task, o1-mini beat Phi-4-mini on both speed and accuracy. It quickly spotted the pattern (each term is the previous one multiplied by 2, 3, 4, 5 in turn, plus 1) and correctly chose 2006 in 10.77 seconds. Phi-4-mini, by contrast, took far longer (50.25 seconds) and still arrived at a wrong answer (120). o1-mini's clear, direct approach solved the problem correctly and efficiently, suggesting it is better at spotting numerical patterns quickly, whereas Phi-4-mini tends to overcomplicate simple problems, leading to errors and delays.
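The pattern itself is easy to verify with a few lines of code (our own check, not either model's output):

```python
# Verify the series: each term is the previous term times an
# increasing factor (2, 3, 4, 5), plus 1.
term = 16
for factor in range(2, 6):
    term = term * factor + 1
    print(term)  # 33, 100, 401, 2006
```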
Task 3: Finding the Longest Substring
This problem asks for the length of the longest substring of a given string that contains no repeating characters. For example, in the string "abcabcbb", the longest substring without repeating characters is "abc", with a length of 3.
Prompt: "Given a string s, find the length of the longest substring without repeating characters. Write a function lengthOfLongestSubstring(s: str) -> int that returns the length of the longest substring without repeating characters."
Input to o1-mini
task3_start_time = time.time()
client = OpenAI(api_key=api_key)

messages = [
    {
        "role": "user",
        "content": """
        Given a string s, find the length of the longest substring without repeating characters.
        Write a function lengthOfLongestSubstring(s: str) -> int that returns the length of the longest substring without repeating characters.
        """
    }
]

# Use a compatible encoding (cl100k_base works for newer OpenAI models)
encoding = tiktoken.get_encoding("cl100k_base")
# Count the input tokens
input_tokens = sum(len(encoding.encode(msg["content"])) for msg in messages)

completion = client.chat.completions.create(
    model="o1-mini-2024-09-12",
    messages=messages
)
output_tokens = len(encoding.encode(completion.choices[0].message.content))
task3_end_time = time.time()

# Print results
print(completion.choices[0].message)
print("----------------=Total Time Taken for task 3:----------------- ", task3_end_time - task3_start_time)

# Display result
display(Markdown(completion.choices[0].message.content))
o1-mini's Response
def lengthOfLongestSubstring(s: str) -> int:
    """
    Returns the length of the longest substring without repeating characters in the string s.

    Parameters:
    s (str): The input string.

    Returns:
    int: The length of the longest substring without repeating characters.
    """
    # Dictionary to store the last positions of each character.
    char_map = {}
    # Initialize the left pointer of the window.
    left = 0
    # Initialize the maximum length.
    max_length = 0

    # Iterate over the string with the right pointer.
    for right, char in enumerate(s):
        # If the character is found in char_map and its last position is >= left pointer,
        # move the left pointer to one position right of its last occurrence to avoid duplicates.
        if char in char_map and char_map[char] >= left:
            left = char_map[char] + 1
        # Update/insert the character's latest index.
        char_map[char] = right
        # Calculate the length of the current window and update max_length if necessary.
        current_window_length = right - left + 1
        if current_window_length > max_length:
            max_length = current_window_length
        # Debug statements to trace the internal state (can be removed in production)
        # print(f"Current character: {char}, Left: {left}, Right: {right}, Current Window: '{s[left:right+1]}', Max Length: {max_length}")

    return max_length

# Example usage:
if __name__ == "__main__":
    test_strings = [
        "abcabcbb",
        "bbbbb",
        "pwwkew",
        "",
        "a",
        "dvdf"
    ]
    for s in test_strings:
        print(f"Longest substring without repeating characters in '{s}' is of length {lengthOfLongestSubstring(s)}.")
Output
(output screenshot)
Input to Phi-4-mini
from transformers import pipeline

messages = [
    {"role": "user",
     "content": """Given a string s, find the length of the longest substring without repeating characters.
Write a function lengthOfLongestSubstring(s: str) -> int that returns the length of the longest substring without repeating characters."""},
]

pipe = pipeline("text-generation", model="microsoft/Phi-4-mini-instruct", trust_remote_code=True)
pipe(messages)
Phi-4-mini's Response
def lengthOfLongestSubstring(s: str) -> int:
    char_index_map = {}
    left = 0
    max_length = 0

    for right in range(len(s)):
        if s[right] in char_index_map and char_index_map[s[right]] >= left:
            left = char_index_map[s[right]] + 1
        char_index_map[s[right]] = right
        max_length = max(max_length, right - left + 1)

    return max_length

print("Longest substring without repeating characters in 'abcabcbb' is of length ", lengthOfLongestSubstring("abcabcbb"))
print("Longest substring without repeating characters in 'bbbbb' is of length ", lengthOfLongestSubstring("bbbbb"))
Output
(output screenshot)
Comparative Analysis
Both o1-mini and Phi-4-mini correctly used the sliding-window approach, but o1-mini's code is better organized and easier to follow. It includes clear explanations, comments, test cases, and readable variable names. Phi-4-mini's solution is shorter but lacks explanation and structure, making it harder to follow in a larger project. o1-mini was also faster, producing a cleaner and more readable solution, while Phi-4-mini focused on keeping the code short.
Overall Comparative Analysis
Here is an overall comparative analysis across all three tasks:
| Aspect | Task 1 (Building Order) | Task 2 (Series Completion) | Task 3 (Longest Non-Repeating Substring) |
|---|---|---|---|
| Accuracy | o1-mini was correct, while Phi-4-mini gave a wrong answer ("Z", which was not among the options). | o1-mini correctly identified 2006, while Phi-4-mini arrived at a wrong answer (120). | Both used the correct sliding-window approach. |
| Response speed | o1-mini was significantly faster. | o1-mini was much faster (10.77s vs. 50.25s). | o1-mini responded slightly faster. |
| Approach | o1-mini used fast, logical reasoning, while Phi-4-mini took unnecessary steps and still erred. | o1-mini followed a structured, efficient pattern-recognition process, while Phi-4-mini overcomplicated it and got the wrong result. | o1-mini delivered a well-structured, well-documented solution, while Phi-4-mini took a concise but less readable approach. |
| Coding practices | N/A | N/A | o1-mini included docstrings, comments, and test cases, making its code easier to understand and maintain; Phi-4-mini favored brevity but lacked documentation. |
| Best use case | o1-mini is more reliable for logical reasoning tasks, while Phi-4-mini's step-by-step approach may suit harder problems. | o1-mini is fast and accurate at numerical pattern recognition, while Phi-4-mini's over-analysis can lead to errors. | o1-mini is better for structured, maintainable code; Phi-4-mini suits short, minimal implementations. |
Conclusion
Overall, o1-mini excelled at structured reasoning, accuracy, and coding best practices, making it the better fit for complex problem solving and maintainable code. Phi-4-mini, though sometimes faster, occasionally let its exploratory approach lead to inefficiency or wrong conclusions, particularly on reasoning tasks. In coding, o1-mini delivered well-documented, readable solutions, whereas Phi-4-mini prioritized brevity at the cost of clarity. If speed is the main concern, Phi-4-mini is a solid choice; for precision, clarity, and structured problem solving, o1-mini is the better option.