I wanted to build an autocorrect plugin, so I started with a simple node transform: I set up a custom autocorrect node and hardcoded two words which, whenever the editor encounters them, should be replaced with another pair of hardcoded words. The problem I am facing is that the identification of the hardcoded words isn't consistent.
The transform only fires if the word I want replaced is the first (and only) thing I type; if I type those words anywhere else, they aren't identified. Any help with this would be really appreciated. Here is the code for the plugin and the custom node:
import { useLexicalComposerContext } from "@lexical/react/LexicalComposerContext";
import { LexicalEditor, TextNode } from "lexical";
import { useEffect } from "react";
import {
$createAutocorrectNode,
AutocorrectNode,
} from "./Nodes/AutocorrectNode";
function autocorrectTransform(node: TextNode) {
const textContent = node.getTextContent();
let replacedNode;
if (textContent === "bikes") {
replacedNode = node.replace($createAutocorrectNode("shumi"));
} else if (textContent === "cars") {
replacedNode = node.replace($createAutocorrectNode("karthik"));
}
replacedNode?.select();
}
function useAutocorrect(editor: LexicalEditor) {
useEffect(() => {
if (!editor.hasNodes([AutocorrectNode])) {
throw new Error(
"AutocorrectNode: AutocorrectNode not registered on editor"
);
}
}, [editor]);
useEffect(() => {
const removeTransform = editor.registerNodeTransform(
TextNode,
autocorrectTransform
);
return () => {
removeTransform();
};
}, [editor]);
}
export default function DictionaryOnHoverPlugin() {
const [editor] = useLexicalComposerContext();
useAutocorrect(editor);
return null;
}
Here is the code for the custom node:
import type { Spread } from "lexical";
import {
type DOMConversionMap,
type DOMConversionOutput,
type DOMExportOutput,
type EditorConfig,
type LexicalNode,
type NodeKey,
type SerializedTextNode,
$applyNodeReplacement,
TextNode,
} from "lexical";
export type SerializedMentionNode = Spread<
{
value: string;
},
SerializedTextNode
>;
function convertAutocorrectElement(
domNode: HTMLElement
): DOMConversionOutput | null {
const textContent = domNode.textContent;
if (textContent !== null) {
const node = $createAutocorrectNode(textContent);
return {
node,
};
}
return null;
}
export class AutocorrectNode extends TextNode {
__value: string;
static getType(): string {
return "autocorrect";
}
static clone(node: AutocorrectNode): AutocorrectNode {
return new AutocorrectNode(node.__value, node.__text, node.__key);
}
static importJSON(serializedNode: SerializedMentionNode): AutocorrectNode {
const node = $createAutocorrectNode(serializedNode.value);
node.setTextContent(serializedNode.text);
node.setFormat(serializedNode.format);
node.setDetail(serializedNode.detail);
node.setMode(serializedNode.mode);
node.setStyle(serializedNode.style);
return node;
}
constructor(value: string, text?: string, key?: NodeKey) {
super(text ?? value, key);
this.__value = value;
}
exportJSON(): SerializedMentionNode {
return {
...super.exportJSON(),
value: this.__value,
type: "autocorrect",
version: 1,
};
}
createDOM(config: EditorConfig): HTMLElement {
const dom = super.createDOM(config);
dom.className = config.theme.incorrect;
return dom;
}
exportDOM(): DOMExportOutput {
const element = document.createElement("span");
element.setAttribute("data-lexical-autocorrect", "true");
element.textContent = this.__text;
return { element };
}
static importDOM(): DOMConversionMap | null {
return {
span: (domNode: HTMLElement) => {
if (!domNode.hasAttribute("data-lexical-autocorrect")) {
return null;
}
return {
conversion: convertAutocorrectElement,
priority: 1,
};
},
};
}
isTextEntity(): true {
return true;
}
canInsertTextBefore(): boolean {
return false;
}
}
export function $createAutocorrectNode(correctedVal: string): AutocorrectNode {
const autocorrectNode = new AutocorrectNode(correctedVal);
autocorrectNode.setMode("segmented").toggleDirectionless();
return $applyNodeReplacement(autocorrectNode);
}
export function $isAutocorrectNode(
node: LexicalNode | null | undefined
): node is AutocorrectNode {
return node instanceof AutocorrectNode;
}
Basically, I want the transform to run every time it encounters the hardcoded value, but it only recognises the value when it is the only text in the editor.
I only realised the flawed assumptions I had made while trying to get that transform running when I found a better way to do text transforms, thanks to an example repo that was shared with me.
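The flawed assumption, in short: a node transform is called with the whole TextNode, and getTextContent() returns that node's entire contiguous run of text, so a strict equality check can only succeed when the hardcoded word is the only content of the node. Roughly:
// inside autocorrectTransform, the node holds the whole text run, not individual words
const textContent = node.getTextContent(); // e.g. "I like bikes"
textContent === "bikes"; // false, so the replacement never fires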
The approach itself is simple:
Lexical offers a neat utility wrapper called useLexicalTextEntity, which can be imported like so:
import { useLexicalTextEntity } from '@lexical/react/useLexicalTextEntity'
The hook takes three arguments: a matchingFunction, the NodeType, and a createNodeType() function. With TypeScript, the hook also accepts a generic, which should be the same node type that the text node is to be transformed into.
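For reference, the hook's typing looks roughly like the sketch below (paraphrased rather than copied from the Lexical source, so treat it as an approximation):
import type { Klass, TextNode } from 'lexical'
// what the matching function has to return when it finds a match
type EntityMatch = { end: number; start: number }
// rough shape of the hook itself
declare function useLexicalTextEntity<T extends TextNode>(
  getMatch: (text: string) => EntityMatch | null, // the matchingFunction
  targetNode: Klass<T>,                           // the NodeType (node class)
  createNode: (textNode: TextNode) => T,          // the createNodeType() function
): void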
The getChapterMatch function should return an object with a start offset and an end offset, or null when there is no match. An example implementation could look like this:
// example 1
const getChapterMatch = useCallback((text: string) => {
  // matches HH:MM:SS:FF, HH:MM:SS or MM:SS style timecodes
  const timeCodeRegex = /(\d{1,2}):(\d{2}):(\d{2}):(\d{2})|(\d{1,2}):(\d{2}):(\d{2})|(\d{1,2}):(\d{2})/g
  const matchArr = timeCodeRegex.exec(text)
  if (matchArr === null) {
    return null
  }
  const timecodeLength = matchArr[0].length
  const startOffset = matchArr.index
  const endOffset = startOffset + timecodeLength
  return {
    start: startOffset,
    end: endOffset,
  }
}, [])
// example 2
const getColoredMatch = useCallback((text: string) => {
  // COLORS is assumed to be defined elsewhere, e.g. const COLORS = ['red', 'green', 'blue']
  const words = text.split(/\s+/);
  for (const word of words) {
    if (COLORS.includes(word)) {
      return {
        start: text.indexOf(word),
        end: text.indexOf(word) + word.length,
      };
    }
  }
  return null;
}, []);
Example 1 was inspired by Lexical's HashtagPlugin, and example 2 is the same implementation as in the example repo shared earlier. Both work fine; use whichever fits your requirement.
The createNodeType() function might look something like this:
const createChapterNode = useCallback((textNode: TextNode): ChapterNode => {
return $createChapterNode(generateId(), textNode.getTextContent())
}, [])
And finally, everything comes together in the useLexicalTextEntity hook:
useLexicalTextEntity<ChapterNode>(getChapterMatch, ChapterNode, createChapterNode)
The complete plugin would look like this:
import { useCallback, useEffect } from 'react'
import { TextNode } from 'lexical'
import { useLexicalTextEntity } from '@lexical/react/useLexicalTextEntity'
import { ChapterNode, $createChapterNode } from './ChapterNode'
import { generateId } from '@/lib/utils'
import { useLexicalComposerContext } from '@lexical/react/LexicalComposerContext'
const ChapterPlugin = () => {
const [editor] = useLexicalComposerContext()
const createChapterNode = useCallback((textNode: TextNode): ChapterNode => {
return $createChapterNode(generateId(), textNode.getTextContent())
}, [])
useEffect(() => {
if (!editor.hasNodes([ChapterNode])) {
throw new Error('ChapterNode: ChapterNode not registered on editor')
}
}, [editor])
const getChapterMatch = useCallback((text: string) => {
const timeCodeRegex = /(\d{1,2}):(\d{2}):(\d{2}):(\d{2})|(\d{1,2}):(\d{2}):(\d{2})|(\d{1,2}):(\d{2})/g
const matchArr = timeCodeRegex.exec(text)
if (matchArr === null) {
return null
}
const timecodeLength = matchArr[0].length
const startOffset = matchArr.index
const endOffset = startOffset + timecodeLength
return {
end: endOffset,
start: startOffset,
}
}, [])
useLexicalTextEntity<ChapterNode>(getChapterMatch, ChapterNode, createChapterNode)
return null
}
export default ChapterPlugin
Note: Remember to register the custom node (ChapterNode here) with the editor if you define one, as shown in the sketch below.
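For completeness, a minimal sketch of that registration, assuming a standard LexicalComposer setup (the namespace, import paths, and surrounding plugins here are placeholders):
import { LexicalComposer } from '@lexical/react/LexicalComposer'
import { ChapterNode } from './ChapterNode'
import ChapterPlugin from './ChapterPlugin'
const initialConfig = {
  namespace: 'MyEditor', // placeholder namespace
  onError: (error: Error) => {
    throw error // surface editor errors instead of swallowing them
  },
  nodes: [ChapterNode], // register the custom node here
}
export const Editor = () => (
  <LexicalComposer initialConfig={initialConfig}>
    {/* RichTextPlugin, ContentEditable, other plugins, etc. */}
    <ChapterPlugin />
  </LexicalComposer>
)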