MYCHAT

Categories: Project    2024-04-21

Screenshots

[Project screenshot 1] [Project screenshot 2]

Technology

  1. React
  2. Typescript
  3. Tailwind
  4. Axios
  5. Zustand
  6. marked-react, dayjs, file-saver

Highlights

  1. Conversations from different corpora: choose the corpus theme that best matches the questioner's content, stay relevant to the conversation context, and give high-quality answers that actually address the question.

  2. GPT-style responses: answers are streamed back like GPT and automatically converted from Markdown for rendering.

  3. Excellent user experience: painless refresh, minimal loading, one-click export of conversations to Word (see the sketch after this list), one-click deletion, and customizable conversation titles.
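
As an illustration of the one-click Word export, here is a minimal sketch built on file-saver; the exportConversation helper, the message shape, and the exact file format are assumptions for illustration, not the project's actual code.

```tsx
import { saveAs } from "file-saver";
import dayjs from "dayjs";

// Hypothetical message shape, inferred from the conversation snippets later on.
interface Message {
  HUMAN?: string;
  AI?: { answer: string }[];
  time: string;
}

// Sketch: join the conversation into plain text and hand a Blob to file-saver.
// Saving with an MS Word MIME type is one simple way to get a file Word opens;
// the real project may build richer content.
export function exportConversation(title: string, messages: Message[]) {
  const body = messages
    .map((m) =>
      m.HUMAN
        ? `[${m.time}] HUMAN: ${m.HUMAN}`
        : `[${m.time}] AI: ${m.AI?.map((a) => a.answer).join("")}`
    )
    .join("\n\n");

  const blob = new Blob([body], { type: "application/msword;charset=utf-8" });
  saveAs(blob, `${title}-${dayjs().format("YYYYMMDD-HHmmss")}.doc`);
}
```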

Difficulties

AI dialogue

Streaming conversation: ordinary GET or POST requests are simply wrapped with Axios. For GPT's streamed conversation data, however, Axios is built on XHR and cannot make a streaming POST request, so Fetch is used for those requests instead.

```ts
import axios from "axios";

// Basic Axios wrapper with double-token "painless" refresh.
// BASE_URL and postRefreshPost come from the project's config and API modules.
export const service = axios.create({
  baseURL: BASE_URL,
  timeout: 100000,
});

// Attach the access token to every outgoing request
service.interceptors.request.use((config) => {
  const token = localStorage.getItem("access_token");
  if (token) {
    config.headers.Authorization = `Bearer ${token}`;
  }
  return config;
});

// When the access token has expired, refresh it and replay the original request
service.interceptors.response.use(async (response) => {
  if (response.data.info === "token invalid") {
    if (response.config.url === "/user/refresh") {
      // The refresh token itself is invalid: clear the stored credentials
      localStorage.removeItem("refresh_token");
      localStorage.removeItem("access_token");
      localStorage.removeItem("user_id");
    } else {
      const newAccessToken = await postRefreshPost({
        refresh_token: localStorage.getItem("refresh_token"),
      });

      localStorage.setItem("access_token", newAccessToken.data.access_token);
      const originalRequest = response.config;
      originalRequest.headers.Authorization = `Bearer ${localStorage.getItem("access_token")}`;
      return service(originalRequest);
    }
  }
  return response;
});
```
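
postRefreshPost is not shown above; a minimal sketch of what it could look like is below. The interceptor checks for "/user/refresh", which suggests the refresh call goes through the same service instance; the exact signature is an assumption.

```ts
// Hypothetical sketch: request a new access token with the stored refresh token.
// Going through `service` matches the "/user/refresh" check in the interceptor above.
export const postRefreshPost = (data: { refresh_token: string | null }) =>
  service.post("/user/refresh", data);
```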

For the streaming request itself:

```ts
const rep = await new_chat({
  session_id: id,
  category: identity,
  content: words_human,
}); // new_chat is a data-request function wrapped around fetch.
const reader = rep.body.getReader();
const decoder = new TextDecoder();

// eslint-disable-next-line no-constant-condition
while (true) {
  const { done, value } = await reader.read();
  // The result has two properties:
  // done  - true once the stream has returned all of its data.
  // value - a chunk of data; when done is true, value is always undefined.
  if (done) break;
  const decoded = decoder.decode(value, { stream: true });
  renderAIRes.current += decoded;
}
```

Because XHR cannot read a streamed POST response, Fetch is used instead. Fetch resolves to a built-in Response object once the request goes through, and the body can be consumed in different formats.

The body of the Response is a readable byte stream, a ReadableStream.

rep.body.getReader() creates a reader that locks the stream; calling its read() method then reads the content chunk by chunk.
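
A minimal sketch of what the fetch-based new_chat wrapper could look like follows; the endpoint path, the parameter type, and the auth header are assumptions inferred from the snippets above, not the project's actual code.

```ts
// Hypothetical parameter shape, taken from the call site above.
interface NewChatParams {
  session_id: number | string;
  category: string;
  content: string;
}

// Hypothetical fetch wrapper; BASE_URL is the same assumed constant as above
// and the "/chat/new" path is an assumption.
export async function new_chat(params: NewChatParams): Promise<Response> {
  // Return the raw Response so the caller can read rep.body as a stream.
  return fetch(`${BASE_URL}/chat/new`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${localStorage.getItem("access_token")}`,
    },
    body: JSON.stringify(params),
  });
}
```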

Markdown conversion and the typewriter effect: my personal blog uses marked to convert Markdown to HTML, with regular-expression matching to adapt the output to TSX for Next.js.

The advantage of the marked-react library is that it is not built on dangerouslySetInnerHTML under the hood, so XSS attacks are avoided and it is safer. It also supports different syntax-highlighting libraries for code blocks, so there is no need to hand-roll that yourself.
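
For contrast, a minimal sketch of the two approaches, assuming a markdown string md; the marked + dangerouslySetInnerHTML variant is what rendering a raw HTML string requires, while marked-react renders React elements directly. The surrounding component is hypothetical.

```tsx
import { marked } from "marked";
import Markdown from "marked-react";

// Hypothetical comparison component; `md` is a markdown string.
function Answer({ md }: { md: string }) {
  return (
    <>
      {/* marked alone produces an HTML string, which forces dangerouslySetInnerHTML */}
      <div dangerouslySetInnerHTML={{ __html: marked.parse(md) as string }} />

      {/* marked-react renders React elements directly, avoiding raw HTML injection */}
      <Markdown>{md}</Markdown>
    </>
  );
}
```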

The difficulty in implementing the typewriter effect is knowing whether text is currently being typed and which chunk is being typed, and detecting when a single chunk has finished so the next chunk can be read.

```tsx
const renderAIRes = useRef("");
// Read a chunk of the streamed data
const decoded = decoder.decode(value, { stream: true });
setConversation([
  ...conversations,
  { HUMAN: words_human, time: askTime },
  {
    AI: [
      {
        answer: renderAIRes.current,
        isChatting: false,
      },
      { answer: decoded, isChatting: true },
    ],
    time: getCurrentTime(),
  },
]);
renderAIRes.current += decoded;
```

isChatting marks the chunk that is currently streaming. Chunks that are no longer streaming are accumulated in a ref, so they do not cause repeated re-renders.

```tsx
// Pseudocode implementation of the render
<div className="">
  {(word as AIType).map((item) => {
    return item.isChatting ? (
      <Typist
        avgTypingDelay={60}
        cursor={{ show: false }}
        key={item.answer}
        onTypingDone={() => {
          oneContentTypingOver(true);
        }}
        className="inline"
      >
        <Markdown>{item.answer}</Markdown>
      </Typist>
    ) : (
      <Markdown key={item.answer}>{item.answer}</Markdown>
    );
  })}
</div>
```

Whether a chunk is rendered through the typewriter is decided by isChatting; once the typewriter finishes a chunk, the next piece of content is picked up and rendered, which produces the GPT-style typewriter effect.
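
One way the "finish typing, then move on to the next chunk" hand-off could be wired up is sketched below; the flag, the helper, and the polling interval are all assumptions for illustration, not the project's actual code.

```tsx
// Hypothetical flag: true once the chunk currently marked isChatting has
// finished typing, so the streaming loop may promote the next chunk.
const typingOver = useRef(true);

function oneContentTypingOver(done: boolean) {
  typingOver.current = done;
}

// Inside the streaming loop: wait for the previous chunk to finish typing
// before rendering the next decoded chunk as the new isChatting entry.
async function waitForTypingDone() {
  while (!typingOver.current) {
    await new Promise((resolve) => setTimeout(resolve, 50));
  }
  typingOver.current = false; // the next chunk is about to start typing
}
```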

User Experience

Painless updates: in previous projects, adding, deleting, or editing backend form data always relied on a fresh data request to re-render the page. That makes for a poor user experience: every update means requesting the data again before rendering, network traffic is heavy, and the page loads slowly. Here, Zustand is used for global data management instead: the fetched data is copied into local state, and every update is applied to that local copy at the same time, so rendering works off local data and the page updates painlessly.
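
A minimal sketch of the kind of Zustand store that could sit behind sessions / setSession in the snippet below; the Session fields are taken from that snippet, but the store itself is an assumption rather than the project's actual code.

```ts
import { create } from "zustand";

// Session shape inferred from the button handler below.
interface Session {
  created_at: string;
  id: number;
  metadata: { title: string; category: string };
  session_id: string;
  updated_at: string;
  user_id: string;
  uuid: string;
}

interface SessionStore {
  sessions: Session[];
  setSession: (sessions: Session[]) => void;
}

// Hypothetical global store: components read `sessions` and write through
// `setSession`, so updates render from local state instead of refetching.
export const useSessionStore = create<SessionStore>((set) => ({
  sessions: [],
  setSession: (sessions) => set({ sessions }),
}));
```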

```tsx
// UI strings: "新对话" = "New conversation", "创建新对话" = "Create new conversation"
<Button
  className="h-[47px] w-[165px] px-3 py-1"
  style={{
    backgroundColor: "#8C7BF7",
    color: "white",
    border: "none",
  }}
  onClick={() => {
    setIsTaking(true);
    // Append the new session to local state instead of refetching the list
    setSession([
      ...sessions,
      {
        created_at: "",
        id: newId,
        metadata: {
          title: "新对话",
          category: "",
        },
        session_id: `${newId}`,
        updated_at: "",
        user_id: "",
        uuid: "",
      },
    ]);
    handleClick(newId + "", "新对话");
    handleChooseIdentity(false);
    setNewId((newId) => newId + 1);
  }}
  disabled={talking}
>
  创建新对话
</Button>
```

Creating and deleting conversations, as well as deleting and updating historical conversation records, all operate on local data rather than on freshly requested data. This approach has its own problems, though, such as a large initial request, which can produce a request waterfall.

RSC (React Server Components) addresses this data-rendering problem, because the server generates the components with their data already bound and returns them to the client for rendering.

Summary

The project was written in a hurry by a single developer, so some components are not encapsulated well enough and are mostly written in one place.

TypeScript is also used sparingly, given the project's small size and the somewhat ad-hoc development process.

The focus was mainly on solving the difficulties around network requests and data handling, together with styling and user experience.

© 2023 - 2024 · GitHub: YYGod0120